@lukestanley
Created March 11, 2025 18:48
2025-03-11 18:17:22.025 [trace] [Window] ChatListItemRenderer#doNextProgressiveRender: START progressive render, index=1, renderData={"lastRenderTime":1741717041924,"renderedWordCount":151,"renderedParts":[{"kind":"markdownContent","content":{"value":"### [test.py](file:///home/user/dev/testing/test.py)\n\nAdd functionality to compare throughput for different batch sizes and generate graphs.\n\n````python\n<vscode_codeblock_uri>file:///home/user/dev/testing/test.py</vscode_codeblock_uri>import time\nimport warnings\nimport os\nfrom typing import List\nimport matplotlib.pyplot as plt # Add matplotlib for graphing\n\nwarnings.filterwarnings(\"ignore\")\nos.environ['TF_CPP_MIN_LOG_LEVEL'] = '3'\n\nfrom outlines import models, generate, samplers\n\n# ...existing code...\n\ndef benchmark_batch_sizes(topic_pairs: List[tuple], llm_choice, batch_sizes: List[int]):\n \"\"\"Benchmark performance for different batch sizes.\"\"\"\n results = {}\n for batch_size in batch_sizes:\n print(f\"\\nBenchmarking batch size: {batch_size}\")\n start_time = time.time()\n for i in range(0, len(topic_pairs), batch_size):\n batch = topic_pairs[i:i + batch_size]\n batch_score_topics(batch, llm_choice)\n total_time = time.time() - start_time\n throughput = len(topic_pairs) / total_time\n results[batch_size] = throughput\n print(f\"Batch size: {batch_size}, Throughput: {throughput:.2f} pairs/second\")\n return results\n\ndef plot_results(results):\n \"\"\"Plot the benchmarking results.\"\"\"\n batch_sizes = list(results.keys())\n throughputs = list(results.values())\n plt.figure(figsize=(10, 6))\n plt.plot(batch_sizes, throughputs, marker='o')\n plt.xlabel('Batch Size')\n plt.ylabel('Throughput (pairs/second)')\n plt.title('Throughput vs Batch Size')\n plt.grid(True)\n plt.show()\n\ndef main():\n # Load the model with generate.choice configured\n llm_choice = load_model()\n \n #","supportThemeIcons":false,"supportHtml":false}}]}
2025-03-11 18:17:22.025 [trace] [Window] ChatListItemRenderer#getNextProgressiveRenderContent: Want to render 152 at 13.783421606789611 words/s, counting...
2025-03-11 18:17:22.026 [trace] [Window] ChatListItemRenderer#getNextProgressiveRenderContent: Chunk 0: Want to render 152 words and found 152 words. Total words in chunk: 216
2025-03-11 18:17:22.026 [trace] [Window] ChatListItemRenderer#getNextProgressiveRenderContent: Want to render 152 words. Rendering 152 words. Buffer: 64 words
2025-03-11 18:17:22.027 [trace] [Window] ChatListItemRenderer#doNextProgressiveRender: not rendering any new content this tick, but more available
2025-03-11 18:17:22.073 [trace] [Window] ChatListItemRenderer#doNextProgressiveRender: START progressive render, index=1, renderData={"lastRenderTime":1741717042026,"renderedWordCount":152,"renderedParts":[{"kind":"markdownContent","content":{"value":"### [test.py](file:///home/user/dev/testing/test.py)\n\nAdd functionality to compare throughput for different batch sizes and generate graphs.\n\n````python\n<vscode_codeblock_uri>file:///home/user/dev/testing/test.py</vscode_codeblock_uri>import time\nimport warnings\nimport os\nfrom typing import List\nimport matplotlib.pyplot as plt # Add matplotlib for graphing\n\nwarnings.filterwarnings(\"ignore\")\nos.environ['TF_CPP_MIN_LOG_LEVEL'] = '3'\n\nfrom outlines import models, generate, samplers\n\n# ...existing code...\n\ndef benchmark_batch_sizes(topic_pairs: List[tuple], llm_choice, batch_sizes: List[int]):\n \"\"\"Benchmark performance for different batch sizes.\"\"\"\n results = {}\n for batch_size in batch_sizes:\n print(f\"\\nBenchmarking batch size: {batch_size}\")\n start_time = time.time()\n for i in range(0, len(topic_pairs), batch_size):\n batch = topic_pairs[i:i + batch_size]\n batch_score_topics(batch, llm_choice)\n total_time = time.time() - start_time\n throughput = len(topic_pairs) / total_time\n results[batch_size] = throughput\n print(f\"Batch size: {batch_size}, Throughput: {throughput:.2f} pairs/second\")\n return results\n\ndef plot_results(results):\n \"\"\"Plot the benchmarking results.\"\"\"\n batch_sizes = list(results.keys())\n throughputs = list(results.values())\n plt.figure(figsize=(10, 6))\n plt.plot(batch_sizes, throughputs, marker='o')\n plt.xlabel('Batch Size')\n plt.ylabel('Throughput (pairs/second)')\n plt.title('Throughput vs Batch Size')\n plt.grid(True)\n plt.show()\n\ndef main():\n # Load the model with generate.choice configured\n llm_choice = load_model()\n \n # List","supportThemeIcons":false,"supportHtml":false}}]}
2025-03-11 18:17:22.074 [trace] [Window] ChatListItemRenderer#getNextProgressiveRenderContent: Want to render 152 at 13.783421606789611 words/s, counting...
2025-03-11 18:17:22.074 [trace] [Window] ChatListItemRenderer#getNextProgressiveRenderContent: Chunk 0: Want to render 152 words and found 152 words. Total words in chunk: 216
2025-03-11 18:17:22.074 [trace] [Window] ChatListItemRenderer#getNextProgressiveRenderContent: Want to render 152 words. Rendering 152 words. Buffer: 64 words
2025-03-11 18:17:22.075 [trace] [Window] ChatListItemRenderer#doNextProgressiveRender: not rendering any new content this tick, but more available
2025-03-11 18:17:22.123 [trace] [Window] ChatListItemRenderer#doNextProgressiveRender: START progressive render, index=1, renderData={"lastRenderTime":1741717042026,"renderedWordCount":152,"renderedParts":[{"kind":"markdownContent","content":{"value":"### [test.py](file:///home/user/dev/testing/test.py)\n\nAdd functionality to compare throughput for different batch sizes and generate graphs.\n\n````python\n<vscode_codeblock_uri>file:///home/user/dev/testing/test.py</vscode_codeblock_uri>import time\nimport warnings\nimport os\nfrom typing import List\nimport matplotlib.pyplot as plt # Add matplotlib for graphing\n\nwarnings.filterwarnings(\"ignore\")\nos.environ['TF_CPP_MIN_LOG_LEVEL'] = '3'\n\nfrom outlines import models, generate, samplers\n\n# ...existing code...\n\ndef benchmark_batch_sizes(topic_pairs: List[tuple], llm_choice, batch_sizes: List[int]):\n \"\"\"Benchmark performance for different batch sizes.\"\"\"\n results = {}\n for batch_size in batch_sizes:\n print(f\"\\nBenchmarking batch size: {batch_size}\")\n start_time = time.time()\n for i in range(0, len(topic_pairs), batch_size):\n batch = topic_pairs[i:i + batch_size]\n batch_score_topics(batch, llm_choice)\n total_time = time.time() - start_time\n throughput = len(topic_pairs) / total_time\n results[batch_size] = throughput\n print(f\"Batch size: {batch_size}, Throughput: {throughput:.2f} pairs/second\")\n return results\n\ndef plot_results(results):\n \"\"\"Plot the benchmarking results.\"\"\"\n batch_sizes = list(results.keys())\n throughputs = list(results.values())\n plt.figure(figsize=(10, 6))\n plt.plot(batch_sizes, throughputs, marker='o')\n plt.xlabel('Batch Size')\n plt.ylabel('Throughput (pairs/second)')\n plt.title('Throughput vs Batch Size')\n plt.grid(True)\n plt.show()\n\ndef main():\n # Load the model with generate.choice configured\n llm_choice = load_model()\n \n # List","supportThemeIcons":false,"supportHtml":false}}]}
2025-03-11 18:17:22.124 [trace] [Window] ChatListItemRenderer#getNextProgressiveRenderContent: Want to render 153 at 13.783421606789611 words/s, counting...
2025-03-11 18:17:22.124 [trace] [Window] ChatListItemRenderer#getNextProgressiveRenderContent: Chunk 0: Want to render 153 words and found 153 words. Total words in chunk: 216
2025-03-11 18:17:22.124 [trace] [Window] ChatListItemRenderer#getNextProgressiveRenderContent: Want to render 153 words. Rendering 153 words. Buffer: 63 words
2025-03-11 18:17:22.125 [trace] [Window] ChatListItemRenderer#doNextProgressiveRender: not rendering any new content this tick, but more available
2025-03-11 18:17:22.173 [trace] [Window] ChatListItemRenderer#doNextProgressiveRender: START progressive render, index=1, renderData={"lastRenderTime":1741717042123,"renderedWordCount":153,"renderedParts":[{"kind":"markdownContent","content":{"value":"### [test.py](file:///home/user/dev/testing/test.py)\n\nAdd functionality to compare throughput for different batch sizes and generate graphs.\n\n````python\n<vscode_codeblock_uri>file:///home/user/dev/testing/test.py</vscode_codeblock_uri>import time\nimport warnings\nimport os\nfrom typing import List\nimport matplotlib.pyplot as plt # Add matplotlib for graphing\n\nwarnings.filterwarnings(\"ignore\")\nos.environ['TF_CPP_MIN_LOG_LEVEL'] = '3'\n\nfrom outlines import models, generate, samplers\n\n# ...existing code...\n\ndef benchmark_batch_sizes(topic_pairs: List[tuple], llm_choice, batch_sizes: List[int]):\n \"\"\"Benchmark performance for different batch sizes.\"\"\"\n results = {}\n for batch_size in batch_sizes:\n print(f\"\\nBenchmarking batch size: {batch_size}\")\n start_time = time.time()\n for i in range(0, len(topic_pairs), batch_size):\n batch = topic_pairs[i:i + batch_size]\n batch_score_topics(batch, llm_choice)\n total_time = time.time() - start_time\n throughput = len(topic_pairs) / total_time\n results[batch_size] = throughput\n print(f\"Batch size: {batch_size}, Throughput: {throughput:.2f} pairs/second\")\n return results\n\ndef plot_results(results):\n \"\"\"Plot the benchmarking results.\"\"\"\n batch_sizes = list(results.keys())\n throughputs = list(results.values())\n plt.figure(figsize=(10, 6))\n plt.plot(batch_sizes, throughputs, marker='o')\n plt.xlabel('Batch Size')\n plt.ylabel('Throughput (pairs/second)')\n plt.title('Throughput vs Batch Size')\n plt.grid(True)\n plt.show()\n\ndef main():\n # Load the model with generate.choice configured\n llm_choice = load_model()\n \n # List of","supportThemeIcons":false,"supportHtml":false}}]}
2025-03-11 18:17:22.174 [trace] [Window] ChatListItemRenderer#getNextProgressiveRenderContent: Want to render 153 at 13.783421606789611 words/s, counting...
2025-03-11 18:17:22.174 [trace] [Window] ChatListItemRenderer#getNextProgressiveRenderContent: Chunk 0: Want to render 153 words and found 153 words. Total words in chunk: 216
2025-03-11 18:17:22.174 [trace] [Window] ChatListItemRenderer#getNextProgressiveRenderContent: Want to render 153 words. Rendering 153 words. Buffer: 63 words
2025-03-11 18:17:22.175 [trace] [Window] ChatListItemRenderer#doNextProgressiveRender: not rendering any new content this tick, but more available
2025-03-11 18:17:22.223 [trace] [Window] ChatListItemRenderer#doNextProgressiveRender: START progressive render, index=1, renderData={"lastRenderTime":1741717042123,"renderedWordCount":153,"renderedParts":[{"kind":"markdownContent","content":{"value":"### [test.py](file:///home/user/dev/testing/test.py)\n\nAdd functionality to compare throughput for different batch sizes and generate graphs.\n\n````python\n<vscode_codeblock_uri>file:///home/user/dev/testing/test.py</vscode_codeblock_uri>import time\nimport warnings\nimport os\nfrom typing import List\nimport matplotlib.pyplot as plt # Add matplotlib for graphing\n\nwarnings.filterwarnings(\"ignore\")\nos.environ['TF_CPP_MIN_LOG_LEVEL'] = '3'\n\nfrom outlines import models, generate, samplers\n\n# ...existing code...\n\ndef benchmark_batch_sizes(topic_pairs: List[tuple], llm_choice, batch_sizes: List[int]):\n \"\"\"Benchmark performance for different batch sizes.\"\"\"\n results = {}\n for batch_size in batch_sizes:\n print(f\"\\nBenchmarking batch size: {batch_size}\")\n start_time = time.time()\n for i in range(0, len(topic_pairs), batch_size):\n batch = topic_pairs[i:i + batch_size]\n batch_score_topics(batch, llm_choice)\n total_time = time.time() - start_time\n throughput = len(topic_pairs) / total_time\n results[batch_size] = throughput\n print(f\"Batch size: {batch_size}, Throughput: {throughput:.2f} pairs/second\")\n return results\n\ndef plot_results(results):\n \"\"\"Plot the benchmarking results.\"\"\"\n batch_sizes = list(results.keys())\n throughputs = list(results.values())\n plt.figure(figsize=(10, 6))\n plt.plot(batch_sizes, throughputs, marker='o')\n plt.xlabel('Batch Size')\n plt.ylabel('Throughput (pairs/second)')\n plt.title('Throughput vs Batch Size')\n plt.grid(True)\n plt.show()\n\ndef main():\n # Load the model with generate.choice configured\n llm_choice = load_model()\n \n # List of","supportThemeIcons":false,"supportHtml":false}}]}
2025-03-11 18:17:22.223 [trace] [Window] ChatListItemRenderer#getNextProgressiveRenderContent: Want to render 154 at 13.783421606789611 words/s, counting...
2025-03-11 18:17:22.223 [trace] [Window] ChatListItemRenderer#getNextProgressiveRenderContent: Chunk 0: Want to render 154 words and found 154 words. Total words in chunk: 216
2025-03-11 18:17:22.223 [trace] [Window] ChatListItemRenderer#getNextProgressiveRenderContent: Want to render 154 words. Rendering 154 words. Buffer: 62 words
2025-03-11 18:17:22.224 [trace] [Window] ChatListItemRenderer#doNextProgressiveRender: not rendering any new content this tick, but more available
2025-03-11 18:17:22.273 [trace] [Window] ChatListItemRenderer#doNextProgressiveRender: START progressive render, index=1, renderData={"lastRenderTime":1741717042223,"renderedWordCount":154,"renderedParts":[{"kind":"markdownContent","content":{"value":"### [test.py](file:///home/user/dev/testing/test.py)\n\nAdd functionality to compare throughput for different batch sizes and generate graphs.\n\n````python\n<vscode_codeblock_uri>file:///home/user/dev/testing/test.py</vscode_codeblock_uri>import time\nimport warnings\nimport os\nfrom typing import List\nimport matplotlib.pyplot as plt # Add matplotlib for graphing\n\nwarnings.filterwarnings(\"ignore\")\nos.environ['TF_CPP_MIN_LOG_LEVEL'] = '3'\n\nfrom outlines import models, generate, samplers\n\n# ...existing code...\n\ndef benchmark_batch_sizes(topic_pairs: List[tuple], llm_choice, batch_sizes: List[int]):\n \"\"\"Benchmark performance for different batch sizes.\"\"\"\n results = {}\n for batch_size in batch_sizes:\n print(f\"\\nBenchmarking batch size: {batch_size}\")\n start_time = time.time()\n for i in range(0, len(topic_pairs), batch_size):\n batch = topic_pairs[i:i + batch_size]\n batch_score_topics(batch, llm_choice)\n total_time = time.time() - start_time\n throughput = len(topic_pairs) / total_time\n results[batch_size] = throughput\n print(f\"Batch size: {batch_size}, Throughput: {throughput:.2f} pairs/second\")\n return results\n\ndef plot_results(results):\n \"\"\"Plot the benchmarking results.\"\"\"\n batch_sizes = list(results.keys())\n throughputs = list(results.values())\n plt.figure(figsize=(10, 6))\n plt.plot(batch_sizes, throughputs, marker='o')\n plt.xlabel('Batch Size')\n plt.ylabel('Throughput (pairs/second)')\n plt.title('Throughput vs Batch Size')\n plt.grid(True)\n plt.show()\n\ndef main():\n # Load the model with generate.choice configured\n llm_choice = load_model()\n \n # List of topic","supportThemeIcons":false,"supportHtml":false}}]}
2025-03-11 18:17:22.274 [trace] [Window] ChatListItemRenderer#getNextProgressiveRenderContent: Want to render 154 at 13.783421606789611 words/s, counting...
2025-03-11 18:17:22.274 [trace] [Window] ChatListItemRenderer#getNextProgressiveRenderContent: Chunk 0: Want to render 154 words and found 154 words. Total words in chunk: 216
2025-03-11 18:17:22.274 [trace] [Window] ChatListItemRenderer#getNextProgressiveRenderContent: Want to render 154 words. Rendering 154 words. Buffer: 62 words
2025-03-11 18:17:22.274 [trace] [Window] ChatListItemRenderer#doNextProgressiveRender: not rendering any new content this tick, but more available
2025-03-11 18:17:22.323 [trace] [Window] ChatListItemRenderer#doNextProgressiveRender: START progressive render, index=1, renderData={"lastRenderTime":1741717042223,"renderedWordCount":154,"renderedParts":[{"kind":"markdownContent","content":{"value":"### [test.py](file:///home/user/dev/testing/test.py)\n\nAdd functionality to compare throughput for different batch sizes and generate graphs.\n\n````python\n<vscode_codeblock_uri>file:///home/user/dev/testing/test.py</vscode_codeblock_uri>import time\nimport warnings\nimport os\nfrom typing import List\nimport matplotlib.pyplot as plt # Add matplotlib for graphing\n\nwarnings.filterwarnings(\"ignore\")\nos.environ['TF_CPP_MIN_LOG_LEVEL'] = '3'\n\nfrom outlines import models, generate, samplers\n\n# ...existing code...\n\ndef benchmark_batch_sizes(topic_pairs: List[tuple], llm_choice, batch_sizes: List[int]):\n \"\"\"Benchmark performance for different batch sizes.\"\"\"\n results = {}\n for batch_size in batch_sizes:\n print(f\"\\nBenchmarking batch size: {batch_size}\")\n start_time = time.time()\n for i in range(0, len(topic_pairs), batch_size):\n batch = topic_pairs[i:i + batch_size]\n batch_score_topics(batch, llm_choice)\n total_time = time.time() - start_time\n throughput = len(topic_pairs) / total_time\n results[batch_size] = throughput\n print(f\"Batch size: {batch_size}, Throughput: {throughput:.2f} pairs/second\")\n return results\n\ndef plot_results(results):\n \"\"\"Plot the benchmarking results.\"\"\"\n batch_sizes = list(results.keys())\n throughputs = list(results.values())\n plt.figure(figsize=(10, 6))\n plt.plot(batch_sizes, throughputs, marker='o')\n plt.xlabel('Batch Size')\n plt.ylabel('Throughput (pairs/second)')\n plt.title('Throughput vs Batch Size')\n plt.grid(True)\n plt.show()\n\ndef main():\n # Load the model with generate.choice configured\n llm_choice = load_model()\n \n # List of topic","supportThemeIcons":false,"supportHtml":false}}]}
2025-03-11 18:17:22.323 [trace] [Window] ChatListItemRenderer#getNextProgressiveRenderContent: Want to render 155 at 13.783421606789611 words/s, counting...
2025-03-11 18:17:22.324 [trace] [Window] ChatListItemRenderer#getNextProgressiveRenderContent: Chunk 0: Want to render 155 words and found 155 words. Total words in chunk: 216
2025-03-11 18:17:22.324 [trace] [Window] ChatListItemRenderer#getNextProgressiveRenderContent: Want to render 155 words. Rendering 155 words. Buffer: 61 words
2025-03-11 18:17:22.324 [trace] [Window] ChatListItemRenderer#doNextProgressiveRender: not rendering any new content this tick, but more available
2025-03-11 18:17:22.373 [trace] [Window] ChatListItemRenderer#doNextProgressiveRender: START progressive render, index=1, renderData={"lastRenderTime":1741717042323,"renderedWordCount":155,"renderedParts":[{"kind":"markdownContent","content":{"value":"### [test.py](file:///home/user/dev/testing/test.py)\n\nAdd functionality to compare throughput for different batch sizes and generate graphs.\n\n````python\n<vscode_codeblock_uri>file:///home/user/dev/testing/test.py</vscode_codeblock_uri>import time\nimport warnings\nimport os\nfrom typing import List\nimport matplotlib.pyplot as plt # Add matplotlib for graphing\n\nwarnings.filterwarnings(\"ignore\")\nos.environ['TF_CPP_MIN_LOG_LEVEL'] = '3'\n\nfrom outlines import models, generate, samplers\n\n# ...existing code...\n\ndef benchmark_batch_sizes(topic_pairs: List[tuple], llm_choice, batch_sizes: List[int]):\n \"\"\"Benchmark performance for different batch sizes.\"\"\"\n results = {}\n for batch_size in batch_sizes:\n print(f\"\\nBenchmarking batch size: {batch_size}\")\n start_time = time.time()\n for i in range(0, len(topic_pairs), batch_size):\n batch = topic_pairs[i:i + batch_size]\n batch_score_topics(batch, llm_choice)\n total_time = time.time() - start_time\n throughput = len(topic_pairs) / total_time\n results[batch_size] = throughput\n print(f\"Batch size: {batch_size}, Throughput: {throughput:.2f} pairs/second\")\n return results\n\ndef plot_results(results):\n \"\"\"Plot the benchmarking results.\"\"\"\n batch_sizes = list(results.keys())\n throughputs = list(results.values())\n plt.figure(figsize=(10, 6))\n plt.plot(batch_sizes, throughputs, marker='o')\n plt.xlabel('Batch Size')\n plt.ylabel('Throughput (pairs/second)')\n plt.title('Throughput vs Batch Size')\n plt.grid(True)\n plt.show()\n\ndef main():\n # Load the model with generate.choice configured\n llm_choice = load_model()\n \n # List of topic pairs","supportThemeIcons":false,"supportHtml":false}}]}
2025-03-11 18:17:22.374 [trace] [Window] ChatListItemRenderer#getNextProgressiveRenderContent: Want to render 155 at 13.783421606789611 words/s, counting...
2025-03-11 18:17:22.374 [trace] [Window] ChatListItemRenderer#getNextProgressiveRenderContent: Chunk 0: Want to render 155 words and found 155 words. Total words in chunk: 216
2025-03-11 18:17:22.375 [trace] [Window] ChatListItemRenderer#getNextProgressiveRenderContent: Want to render 155 words. Rendering 155 words. Buffer: 61 words
2025-03-11 18:17:22.375 [trace] [Window] ChatListItemRenderer#doNextProgressiveRender: not rendering any new content this tick, but more available
2025-03-11 18:17:22.423 [trace] [Window] ChatListItemRenderer#doNextProgressiveRender: START progressive render, index=1, renderData={"lastRenderTime":1741717042323,"renderedWordCount":155,"renderedParts":[{"kind":"markdownContent","content":{"value":"### [test.py](file:///home/user/dev/testing/test.py)\n\nAdd functionality to compare throughput for different batch sizes and generate graphs.\n\n````python\n<vscode_codeblock_uri>file:///home/user/dev/testing/test.py</vscode_codeblock_uri>import time\nimport warnings\nimport os\nfrom typing import List\nimport matplotlib.pyplot as plt # Add matplotlib for graphing\n\nwarnings.filterwarnings(\"ignore\")\nos.environ['TF_CPP_MIN_LOG_LEVEL'] = '3'\n\nfrom outlines import models, generate, samplers\n\n# ...existing code...\n\ndef benchmark_batch_sizes(topic_pairs: List[tuple], llm_choice, batch_sizes: List[int]):\n \"\"\"Benchmark performance for different batch sizes.\"\"\"\n results = {}\n for batch_size in batch_sizes:\n print(f\"\\nBenchmarking batch size: {batch_size}\")\n start_time = time.time()\n for i in range(0, len(topic_pairs), batch_size):\n batch = topic_pairs[i:i + batch_size]\n batch_score_topics(batch, llm_choice)\n total_time = time.time() - start_time\n throughput = len(topic_pairs) / total_time\n results[batch_size] = throughput\n print(f\"Batch size: {batch_size}, Throughput: {throughput:.2f} pairs/second\")\n return results\n\ndef plot_results(results):\n \"\"\"Plot the benchmarking results.\"\"\"\n batch_sizes = list(results.keys())\n throughputs = list(results.values())\n plt.figure(figsize=(10, 6))\n plt.plot(batch_sizes, throughputs, marker='o')\n plt.xlabel('Batch Size')\n plt.ylabel('Throughput (pairs/second)')\n plt.title('Throughput vs Batch Size')\n plt.grid(True)\n plt.show()\n\ndef main():\n # Load the model with generate.choice configured\n llm_choice = load_model()\n \n # List of topic pairs","supportThemeIcons":false,"supportHtml":false}}]}
2025-03-11 18:17:22.423 [trace] [Window] ChatListItemRenderer#getNextProgressiveRenderContent: Want to render 156 at 13.783421606789611 words/s, counting...
2025-03-11 18:17:22.424 [trace] [Window] ChatListItemRenderer#getNextProgressiveRenderContent: Chunk 0: Want to render 156 words and found 156 words. Total words in chunk: 216
2025-03-11 18:17:22.424 [trace] [Window] ChatListItemRenderer#getNextProgressiveRenderContent: Want to render 156 words. Rendering 156 words. Buffer: 60 words
2025-03-11 18:17:22.424 [trace] [Window] ChatListItemRenderer#doNextProgressiveRender: not rendering any new content this tick, but more available
2025-03-11 18:17:22.473 [trace] [Window] ChatListItemRenderer#doNextProgressiveRender: START progressive render, index=1, renderData={"lastRenderTime":1741717042424,"renderedWordCount":156,"renderedParts":[{"kind":"markdownContent","content":{"value":"### [test.py](file:///home/user/dev/testing/test.py)\n\nAdd functionality to compare throughput for different batch sizes and generate graphs.\n\n````python\n<vscode_codeblock_uri>file:///home/user/dev/testing/test.py</vscode_codeblock_uri>import time\nimport warnings\nimport os\nfrom typing import List\nimport matplotlib.pyplot as plt # Add matplotlib for graphing\n\nwarnings.filterwarnings(\"ignore\")\nos.environ['TF_CPP_MIN_LOG_LEVEL'] = '3'\n\nfrom outlines import models, generate, samplers\n\n# ...existing code...\n\ndef benchmark_batch_sizes(topic_pairs: List[tuple], llm_choice, batch_sizes: List[int]):\n \"\"\"Benchmark performance for different batch sizes.\"\"\"\n results = {}\n for batch_size in batch_sizes:\n print(f\"\\nBenchmarking batch size: {batch_size}\")\n start_time = time.time()\n for i in range(0, len(topic_pairs), batch_size):\n batch = topic_pairs[i:i + batch_size]\n batch_score_topics(batch, llm_choice)\n total_time = time.time() - start_time\n throughput = len(topic_pairs) / total_time\n results[batch_size] = throughput\n print(f\"Batch size: {batch_size}, Throughput: {throughput:.2f} pairs/second\")\n return results\n\ndef plot_results(results):\n \"\"\"Plot the benchmarking results.\"\"\"\n batch_sizes = list(results.keys())\n throughputs = list(results.values())\n plt.figure(figsize=(10, 6))\n plt.plot(batch_sizes, throughputs, marker='o')\n plt.xlabel('Batch Size')\n plt.ylabel('Throughput (pairs/second)')\n plt.title('Throughput vs Batch Size')\n plt.grid(True)\n plt.show()\n\ndef main():\n # Load the model with generate.choice configured\n llm_choice = load_model()\n \n # List of topic pairs to","supportThemeIcons":false,"supportHtml":false}}]}
2025-03-11 18:17:22.474 [trace] [Window] ChatListItemRenderer#getNextProgressiveRenderContent: Want to render 156 at 13.783421606789611 words/s, counting...
2025-03-11 18:17:22.474 [trace] [Window] ChatListItemRenderer#getNextProgressiveRenderContent: Chunk 0: Want to render 156 words and found 156 words. Total words in chunk: 216
2025-03-11 18:17:22.475 [trace] [Window] ChatListItemRenderer#getNextProgressiveRenderContent: Want to render 156 words. Rendering 156 words. Buffer: 60 words
2025-03-11 18:17:22.475 [trace] [Window] ChatListItemRenderer#doNextProgressiveRender: not rendering any new content this tick, but more available
2025-03-11 18:17:22.523 [trace] [Window] ChatListItemRenderer#doNextProgressiveRender: START progressive render, index=1, renderData={"lastRenderTime":1741717042424,"renderedWordCount":156,"renderedParts":[{"kind":"markdownContent","content":{"value":"### [test.py](file:///home/user/dev/testing/test.py)\n\nAdd functionality to compare throughput for different batch sizes and generate graphs.\n\n````python\n<vscode_codeblock_uri>file:///home/user/dev/testing/test.py</vscode_codeblock_uri>import time\nimport warnings\nimport os\nfrom typing import List\nimport matplotlib.pyplot as plt # Add matplotlib for graphing\n\nwarnings.filterwarnings(\"ignore\")\nos.environ['TF_CPP_MIN_LOG_LEVEL'] = '3'\n\nfrom outlines import models, generate, samplers\n\n# ...existing code...\n\ndef benchmark_batch_sizes(topic_pairs: List[tuple], llm_choice, batch_sizes: List[int]):\n \"\"\"Benchmark performance for different batch sizes.\"\"\"\n results = {}\n for batch_size in batch_sizes:\n print(f\"\\nBenchmarking batch size: {batch_size}\")\n start_time = time.time()\n for i in range(0, len(topic_pairs), batch_size):\n batch = topic_pairs[i:i + batch_size]\n batch_score_topics(batch, llm_choice)\n total_time = time.time() - start_time\n throughput = len(topic_pairs) / total_time\n results[batch_size] = throughput\n print(f\"Batch size: {batch_size}, Throughput: {throughput:.2f} pairs/second\")\n return results\n\ndef plot_results(results):\n \"\"\"Plot the benchmarking results.\"\"\"\n batch_sizes = list(results.keys())\n throughputs = list(results.values())\n plt.figure(figsize=(10, 6))\n plt.plot(batch_sizes, throughputs, marker='o')\n plt.xlabel('Batch Size')\n plt.ylabel('Throughput (pairs/second)')\n plt.title('Throughput vs Batch Size')\n plt.grid(True)\n plt.show()\n\ndef main():\n # Load the model with generate.choice configured\n llm_choice = load_model()\n \n # List of topic pairs to","supportThemeIcons":false,"supportHtml":false}}]}
2025-03-11 18:17:22.524 [trace] [Window] ChatListItemRenderer#getNextProgressiveRenderContent: Want to render 157 at 13.783421606789611 words/s, counting...
2025-03-11 18:17:22.524 [trace] [Window] ChatListItemRenderer#getNextProgressiveRenderContent: Chunk 0: Want to render 157 words and found 157 words. Total words in chunk: 216
2025-03-11 18:17:22.525 [trace] [Window] ChatListItemRenderer#getNextProgressiveRenderContent: Want to render 157 words. Rendering 157 words. Buffer: 59 words
2025-03-11 18:17:22.525 [trace] [Window] ChatListItemRenderer#doNextProgressiveRender: not rendering any new content this tick, but more available
2025-03-11 18:17:22.573 [trace] [Window] ChatListItemRenderer#doNextProgressiveRender: START progressive render, index=1, renderData={"lastRenderTime":1741717042524,"renderedWordCount":157,"renderedParts":[{"kind":"markdownContent","content":{"value":"### [test.py](file:///home/user/dev/testing/test.py)\n\nAdd functionality to compare throughput for different batch sizes and generate graphs.\n\n````python\n<vscode_codeblock_uri>file:///home/user/dev/testing/test.py</vscode_codeblock_uri>import time\nimport warnings\nimport os\nfrom typing import List\nimport matplotlib.pyplot as plt # Add matplotlib for graphing\n\nwarnings.filterwarnings(\"ignore\")\nos.environ['TF_CPP_MIN_LOG_LEVEL'] = '3'\n\nfrom outlines import models, generate, samplers\n\n# ...existing code...\n\ndef benchmark_batch_sizes(topic_pairs: List[tuple], llm_choice, batch_sizes: List[int]):\n \"\"\"Benchmark performance for different batch sizes.\"\"\"\n results = {}\n for batch_size in batch_sizes:\n print(f\"\\nBenchmarking batch size: {batch_size}\")\n start_time = time.time()\n for i in range(0, len(topic_pairs), batch_size):\n batch = topic_pairs[i:i + batch_size]\n batch_score_topics(batch, llm_choice)\n total_time = time.time() - start_time\n throughput = len(topic_pairs) / total_time\n results[batch_size] = throughput\n print(f\"Batch size: {batch_size}, Throughput: {throughput:.2f} pairs/second\")\n return results\n\ndef plot_results(results):\n \"\"\"Plot the benchmarking results.\"\"\"\n batch_sizes = list(results.keys())\n throughputs = list(results.values())\n plt.figure(figsize=(10, 6))\n plt.plot(batch_sizes, throughputs, marker='o')\n plt.xlabel('Batch Size')\n plt.ylabel('Throughput (pairs/second)')\n plt.title('Throughput vs Batch Size')\n plt.grid(True)\n plt.show()\n\ndef main():\n # Load the model with generate.choice configured\n llm_choice = load_model()\n \n # List of topic pairs to score","supportThemeIcons":false,"supportHtml":false}}]}
2025-03-11 18:17:22.574 [trace] [Window] ChatListItemRenderer#getNextProgressiveRenderContent: Want to render 157 at 13.783421606789611 words/s, counting...
2025-03-11 18:17:22.574 [trace] [Window] ChatListItemRenderer#getNextProgressiveRenderContent: Chunk 0: Want to render 157 words and found 157 words. Total words in chunk: 216
2025-03-11 18:17:22.574 [trace] [Window] ChatListItemRenderer#getNextProgressiveRenderContent: Want to render 157 words. Rendering 157 words. Buffer: 59 words
2025-03-11 18:17:22.575 [trace] [Window] ChatListItemRenderer#doNextProgressiveRender: not rendering any new content this tick, but more available
2025-03-11 18:17:22.623 [trace] [Window] ChatListItemRenderer#doNextProgressiveRender: START progressive render, index=1, renderData={"lastRenderTime":1741717042524,"renderedWordCount":157,"renderedParts":[{"kind":"markdownContent","content":{"value":"### [test.py](file:///home/user/dev/testing/test.py)\n\nAdd functionality to compare throughput for different batch sizes and generate graphs.\n\n````python\n<vscode_codeblock_uri>file:///home/user/dev/testing/test.py</vscode_codeblock_uri>import time\nimport warnings\nimport os\nfrom typing import List\nimport matplotlib.pyplot as plt # Add matplotlib for graphing\n\nwarnings.filterwarnings(\"ignore\")\nos.environ['TF_CPP_MIN_LOG_LEVEL'] = '3'\n\nfrom outlines import models, generate, samplers\n\n# ...existing code...\n\ndef benchmark_batch_sizes(topic_pairs: List[tuple], llm_choice, batch_sizes: List[int]):\n \"\"\"Benchmark performance for different batch sizes.\"\"\"\n results = {}\n for batch_size in batch_sizes:\n print(f\"\\nBenchmarking batch size: {batch_size}\")\n start_time = time.time()\n for i in range(0, len(topic_pairs), batch_size):\n batch = topic_pairs[i:i + batch_size]\n batch_score_topics(batch, llm_choice)\n total_time = time.time() - start_time\n throughput = len(topic_pairs) / total_time\n results[batch_size] = throughput\n print(f\"Batch size: {batch_size}, Throughput: {throughput:.2f} pairs/second\")\n return results\n\ndef plot_results(results):\n \"\"\"Plot the benchmarking results.\"\"\"\n batch_sizes = list(results.keys())\n throughputs = list(results.values())\n plt.figure(figsize=(10, 6))\n plt.plot(batch_sizes, throughputs, marker='o')\n plt.xlabel('Batch Size')\n plt.ylabel('Throughput (pairs/second)')\n plt.title('Throughput vs Batch Size')\n plt.grid(True)\n plt.show()\n\ndef main():\n # Load the model with generate.choice configured\n llm_choice = load_model()\n \n # List of topic pairs to score","supportThemeIcons":false,"supportHtml":false}}]}
2025-03-11 18:17:22.624 [trace] [Window] ChatListItemRenderer#getNextProgressiveRenderContent: Want to render 158 at 13.783421606789611 words/s, counting...
2025-03-11 18:17:22.624 [trace] [Window] ChatListItemRenderer#getNextProgressiveRenderContent: Chunk 0: Want to render 158 words and found 158 words. Total words in chunk: 216
2025-03-11 18:17:22.624 [trace] [Window] ChatListItemRenderer#getNextProgressiveRenderContent: Want to render 158 words. Rendering 158 words. Buffer: 58 words
2025-03-11 18:17:22.624 [trace] [Window] ChatListItemRenderer#doNextProgressiveRender: not rendering any new content this tick, but more available
2025-03-11 18:17:22.673 [trace] [Window] ChatListItemRenderer#doNextProgressiveRender: START progressive render, index=1, renderData={"lastRenderTime":1741717042623,"renderedWordCount":158,"renderedParts":[{"kind":"markdownContent","content":{"value":"### [test.py](file:///home/user/dev/testing/test.py)\n\nAdd functionality to compare throughput for different batch sizes and generate graphs.\n\n````python\n<vscode_codeblock_uri>file:///home/user/dev/testing/test.py</vscode_codeblock_uri>import time\nimport warnings\nimport os\nfrom typing import List\nimport matplotlib.pyplot as plt # Add matplotlib for graphing\n\nwarnings.filterwarnings(\"ignore\")\nos.environ['TF_CPP_MIN_LOG_LEVEL'] = '3'\n\nfrom outlines import models, generate, samplers\n\n# ...existing code...\n\ndef benchmark_batch_sizes(topic_pairs: List[tuple], llm_choice, batch_sizes: List[int]):\n \"\"\"Benchmark performance for different batch sizes.\"\"\"\n results = {}\n for batch_size in batch_sizes:\n print(f\"\\nBenchmarking batch size: {batch_size}\")\n start_time = time.time()\n for i in range(0, len(topic_pairs), batch_size):\n batch = topic_pairs[i:i + batch_size]\n batch_score_topics(batch, llm_choice)\n total_time = time.time() - start_time\n throughput = len(topic_pairs) / total_time\n results[batch_size] = throughput\n print(f\"Batch size: {batch_size}, Throughput: {throughput:.2f} pairs/second\")\n return results\n\ndef plot_results(results):\n \"\"\"Plot the benchmarking results.\"\"\"\n batch_sizes = list(results.keys())\n throughputs = list(results.values())\n plt.figure(figsize=(10, 6))\n plt.plot(batch_sizes, throughputs, marker='o')\n plt.xlabel('Batch Size')\n plt.ylabel('Throughput (pairs/second)')\n plt.title('Throughput vs Batch Size')\n plt.grid(True)\n plt.show()\n\ndef main():\n # Load the model with generate.choice configured\n llm_choice = load_model()\n \n # List of topic pairs to score\n topic_pairs","supportThemeIcons":false,"supportHtml":false}}]}
2025-03-11 18:17:22.673 [trace] [Window] ChatListItemRenderer#getNextProgressiveRenderContent: Want to render 158 at 13.783421606789611 words/s, counting...
2025-03-11 18:17:22.674 [trace] [Window] ChatListItemRenderer#getNextProgressiveRenderContent: Chunk 0: Want to render 158 words and found 158 words. Total words in chunk: 216
2025-03-11 18:17:22.674 [trace] [Window] ChatListItemRenderer#getNextProgressiveRenderContent: Want to render 158 words. Rendering 158 words. Buffer: 58 words
2025-03-11 18:17:22.674 [trace] [Window] ChatListItemRenderer#doNextProgressiveRender: not rendering any new content this tick, but more available
2025-03-11 18:17:22.723 [trace] [Window] ChatListItemRenderer#doNextProgressiveRender: START progressive render, index=1, renderData={"lastRenderTime":1741717042623,"renderedWordCount":158,"renderedParts":[{"kind":"markdownContent","content":{"value":"### [test.py](file:///home/user/dev/testing/test.py)\n\nAdd functionality to compare throughput for different batch sizes and generate graphs.\n\n````python\n<vscode_codeblock_uri>file:///home/user/dev/testing/test.py</vscode_codeblock_uri>import time\nimport warnings\nimport os\nfrom typing import List\nimport matplotlib.pyplot as plt # Add matplotlib for graphing\n\nwarnings.filterwarnings(\"ignore\")\nos.environ['TF_CPP_MIN_LOG_LEVEL'] = '3'\n\nfrom outlines import models, generate, samplers\n\n# ...existing code...\n\ndef benchmark_batch_sizes(topic_pairs: List[tuple], llm_choice, batch_sizes: List[int]):\n \"\"\"Benchmark performance for different batch sizes.\"\"\"\n results = {}\n for batch_size in batch_sizes:\n print(f\"\\nBenchmarking batch size: {batch_size}\")\n start_time = time.time()\n for i in range(0, len(topic_pairs), batch_size):\n batch = topic_pairs[i:i + batch_size]\n batch_score_topics(batch, llm_choice)\n total_time = time.time() - start_time\n throughput = len(topic_pairs) / total_time\n results[batch_size] = throughput\n print(f\"Batch size: {batch_size}, Throughput: {throughput:.2f} pairs/second\")\n return results\n\ndef plot_results(results):\n \"\"\"Plot the benchmarking results.\"\"\"\n batch_sizes = list(results.keys())\n throughputs = list(results.values())\n plt.figure(figsize=(10, 6))\n plt.plot(batch_sizes, throughputs, marker='o')\n plt.xlabel('Batch Size')\n plt.ylabel('Throughput (pairs/second)')\n plt.title('Throughput vs Batch Size')\n plt.grid(True)\n plt.show()\n\ndef main():\n # Load the model with generate.choice configured\n llm_choice = load_model()\n \n # List of topic pairs to score\n topic_pairs","supportThemeIcons":false,"supportHtml":false}}]}
2025-03-11 18:17:22.724 [trace] [Window] ChatListItemRenderer#getNextProgressiveRenderContent: Want to render 159 at 13.783421606789611 words/s, counting...
2025-03-11 18:17:22.724 [trace] [Window] ChatListItemRenderer#getNextProgressiveRenderContent: Chunk 0: Want to render 159 words and found 159 words. Total words in chunk: 216
2025-03-11 18:17:22.725 [trace] [Window] ChatListItemRenderer#getNextProgressiveRenderContent: Want to render 159 words. Rendering 159 words. Buffer: 57 words
2025-03-11 18:17:22.725 [trace] [Window] ChatListItemRenderer#doNextProgressiveRender: not rendering any new content this tick, but more available
2025-03-11 18:17:22.773 [trace] [Window] ChatListItemRenderer#doNextProgressiveRender: START progressive render, index=1, renderData={"lastRenderTime":1741717042723,"renderedWordCount":159,"renderedParts":[{"kind":"markdownContent","content":{"value":"### [test.py](file:///home/user/dev/testing/test.py)\n\nAdd functionality to compare throughput for different batch sizes and generate graphs.\n\n````python\n<vscode_codeblock_uri>file:///home/user/dev/testing/test.py</vscode_codeblock_uri>import time\nimport warnings\nimport os\nfrom typing import List\nimport matplotlib.pyplot as plt # Add matplotlib for graphing\n\nwarnings.filterwarnings(\"ignore\")\nos.environ['TF_CPP_MIN_LOG_LEVEL'] = '3'\n\nfrom outlines import models, generate, samplers\n\n# ...existing code...\n\ndef benchmark_batch_sizes(topic_pairs: List[tuple], llm_choice, batch_sizes: List[int]):\n \"\"\"Benchmark performance for different batch sizes.\"\"\"\n results = {}\n for batch_size in batch_sizes:\n print(f\"\\nBenchmarking batch size: {batch_size}\")\n start_time = time.time()\n for i in range(0, len(topic_pairs), batch_size):\n batch = topic_pairs[i:i + batch_size]\n batch_score_topics(batch, llm_choice)\n total_time = time.time() - start_time\n throughput = len(topic_pairs) / total_time\n results[batch_size] = throughput\n print(f\"Batch size: {batch_size}, Throughput: {throughput:.2f} pairs/second\")\n return results\n\ndef plot_results(results):\n \"\"\"Plot the benchmarking results.\"\"\"\n batch_sizes = list(results.keys())\n throughputs = list(results.values())\n plt.figure(figsize=(10, 6))\n plt.plot(batch_sizes, throughputs, marker='o')\n plt.xlabel('Batch Size')\n plt.ylabel('Throughput (pairs/second)')\n plt.title('Throughput vs Batch Size')\n plt.grid(True)\n plt.show()\n\ndef main():\n # Load the model with generate.choice configured\n llm_choice = load_model()\n \n # List of topic pairs to score\n topic_pairs =","supportThemeIcons":false,"supportHtml":false}}]}
2025-03-11 18:17:22.774 [trace] [Window] ChatListItemRenderer#getNextProgressiveRenderContent: Want to render 159 at 13.783421606789611 words/s, counting...
2025-03-11 18:17:22.774 [trace] [Window] ChatListItemRenderer#getNextProgressiveRenderContent: Chunk 0: Want to render 159 words and found 159 words. Total words in chunk: 216
2025-03-11 18:17:22.774 [trace] [Window] ChatListItemRenderer#getNextProgressiveRenderContent: Want to render 159 words. Rendering 159 words. Buffer: 57 words
2025-03-11 18:17:22.775 [trace] [Window] ChatListItemRenderer#doNextProgressiveRender: not rendering any new content this tick, but more available
2025-03-11 18:17:22.823 [trace] [Window] ChatListItemRenderer#doNextProgressiveRender: START progressive render, index=1, renderData={"lastRenderTime":1741717042723,"renderedWordCount":159,"renderedParts":[{"kind":"markdownContent","content":{"value":"### [test.py](file:///home/user/dev/testing/test.py)\n\nAdd functionality to compare throughput for different batch sizes and generate graphs.\n\n````python\n<vscode_codeblock_uri>file:///home/user/dev/testing/test.py</vscode_codeblock_uri>import time\nimport warnings\nimport os\nfrom typing import List\nimport matplotlib.pyplot as plt # Add matplotlib for graphing\n\nwarnings.filterwarnings(\"ignore\")\nos.environ['TF_CPP_MIN_LOG_LEVEL'] = '3'\n\nfrom outlines import models, generate, samplers\n\n# ...existing code...\n\ndef benchmark_batch_sizes(topic_pairs: List[tuple], llm_choice, batch_sizes: List[int]):\n \"\"\"Benchmark performance for different batch sizes.\"\"\"\n results = {}\n for batch_size in batch_sizes:\n print(f\"\\nBenchmarking batch size: {batch_size}\")\n start_time = time.time()\n for i in range(0, len(topic_pairs), batch_size):\n batch = topic_pairs[i:i + batch_size]\n batch_score_topics(batch, llm_choice)\n total_time = time.time() - start_time\n throughput = len(topic_pairs) / total_time\n results[batch_size] = throughput\n print(f\"Batch size: {batch_size}, Throughput: {throughput:.2f} pairs/second\")\n return results\n\ndef plot_results(results):\n \"\"\"Plot the benchmarking results.\"\"\"\n batch_sizes = list(results.keys())\n throughputs = list(results.values())\n plt.figure(figsize=(10, 6))\n plt.plot(batch_sizes, throughputs, marker='o')\n plt.xlabel('Batch Size')\n plt.ylabel('Throughput (pairs/second)')\n plt.title('Throughput vs Batch Size')\n plt.grid(True)\n plt.show()\n\ndef main():\n # Load the model with generate.choice configured\n llm_choice = load_model()\n \n # List of topic pairs to score\n topic_pairs =","supportThemeIcons":false,"supportHtml":false}}]}
2025-03-11 18:17:22.824 [trace] [Window] ChatListItemRenderer#getNextProgressiveRenderContent: Want to render 160 at 13.783421606789611 words/s, counting...
2025-03-11 18:17:22.824 [trace] [Window] ChatListItemRenderer#getNextProgressiveRenderContent: Chunk 0: Want to render 160 words and found 160 words. Total words in chunk: 216
2025-03-11 18:17:22.824 [trace] [Window] ChatListItemRenderer#getNextProgressiveRenderContent: Want to render 160 words. Rendering 160 words. Buffer: 56 words
2025-03-11 18:17:22.825 [trace] [Window] ChatListItemRenderer#doNextProgressiveRender: not rendering any new content this tick, but more available
2025-03-11 18:17:22.873 [trace] [Window] ChatListItemRenderer#doNextProgressiveRender: START progressive render, index=1, renderData={"lastRenderTime":1741717042823,"renderedWordCount":160,"renderedParts":[{"kind":"markdownContent","content":{"value":"### [test.py](file:///home/user/dev/testing/test.py)\n\nAdd functionality to compare throughput for different batch sizes and generate graphs.\n\n````python\n<vscode_codeblock_uri>file:///home/user/dev/testing/test.py</vscode_codeblock_uri>import time\nimport warnings\nimport os\nfrom typing import List\nimport matplotlib.pyplot as plt # Add matplotlib for graphing\n\nwarnings.filterwarnings(\"ignore\")\nos.environ['TF_CPP_MIN_LOG_LEVEL'] = '3'\n\nfrom outlines import models, generate, samplers\n\n# ...existing code...\n\ndef benchmark_batch_sizes(topic_pairs: List[tuple], llm_choice, batch_sizes: List[int]):\n \"\"\"Benchmark performance for different batch sizes.\"\"\"\n results = {}\n for batch_size in batch_sizes:\n print(f\"\\nBenchmarking batch size: {batch_size}\")\n start_time = time.time()\n for i in range(0, len(topic_pairs), batch_size):\n batch = topic_pairs[i:i + batch_size]\n batch_score_topics(batch, llm_choice)\n total_time = time.time() - start_time\n throughput = len(topic_pairs) / total_time\n results[batch_size] = throughput\n print(f\"Batch size: {batch_size}, Throughput: {throughput:.2f} pairs/second\")\n return results\n\ndef plot_results(results):\n \"\"\"Plot the benchmarking results.\"\"\"\n batch_sizes = list(results.keys())\n throughputs = list(results.values())\n plt.figure(figsize=(10, 6))\n plt.plot(batch_sizes, throughputs, marker='o')\n plt.xlabel('Batch Size')\n plt.ylabel('Throughput (pairs/second)')\n plt.title('Throughput vs Batch Size')\n plt.grid(True)\n plt.show()\n\ndef main():\n # Load the model with generate.choice configured\n llm_choice = load_model()\n \n # List of topic pairs to score\n topic_pairs = [","supportThemeIcons":false,"supportHtml":false}}]}
2025-03-11 18:17:22.874 [trace] [Window] ChatListItemRenderer#getNextProgressiveRenderContent: Want to render 160 at 13.783421606789611 words/s, counting...
2025-03-11 18:17:22.874 [trace] [Window] ChatListItemRenderer#getNextProgressiveRenderContent: Chunk 0: Want to render 160 words and found 160 words. Total words in chunk: 216
2025-03-11 18:17:22.874 [trace] [Window] ChatListItemRenderer#getNextProgressiveRenderContent: Want to render 160 words. Rendering 160 words. Buffer: 56 words
2025-03-11 18:17:22.875 [trace] [Window] ChatListItemRenderer#doNextProgressiveRender: not rendering any new content this tick, but more available
2025-03-11 18:17:22.923 [trace] [Window] ChatListItemRenderer#doNextProgressiveRender: START progressive render, index=1, renderData={"lastRenderTime":1741717042823,"renderedWordCount":160,"renderedParts":[{"kind":"markdownContent","content":{"value":"### [test.py](file:///home/user/dev/testing/test.py)\n\nAdd functionality to compare throughput for different batch sizes and generate graphs.\n\n````python\n<vscode_codeblock_uri>file:///home/user/dev/testing/test.py</vscode_codeblock_uri>import time\nimport warnings\nimport os\nfrom typing import List\nimport matplotlib.pyplot as plt # Add matplotlib for graphing\n\nwarnings.filterwarnings(\"ignore\")\nos.environ['TF_CPP_MIN_LOG_LEVEL'] = '3'\n\nfrom outlines import models, generate, samplers\n\n# ...existing code...\n\ndef benchmark_batch_sizes(topic_pairs: List[tuple], llm_choice, batch_sizes: List[int]):\n \"\"\"Benchmark performance for different batch sizes.\"\"\"\n results = {}\n for batch_size in batch_sizes:\n print(f\"\\nBenchmarking batch size: {batch_size}\")\n start_time = time.time()\n for i in range(0, len(topic_pairs), batch_size):\n batch = topic_pairs[i:i + batch_size]\n batch_score_topics(batch, llm_choice)\n total_time = time.time() - start_time\n throughput = len(topic_pairs) / total_time\n results[batch_size] = throughput\n print(f\"Batch size: {batch_size}, Throughput: {throughput:.2f} pairs/second\")\n return results\n\ndef plot_results(results):\n \"\"\"Plot the benchmarking results.\"\"\"\n batch_sizes = list(results.keys())\n throughputs = list(results.values())\n plt.figure(figsize=(10, 6))\n plt.plot(batch_sizes, throughputs, marker='o')\n plt.xlabel('Batch Size')\n plt.ylabel('Throughput (pairs/second)')\n plt.title('Throughput vs Batch Size')\n plt.grid(True)\n plt.show()\n\ndef main():\n # Load the model with generate.choice configured\n llm_choice = load_model()\n \n # List of topic pairs to score\n topic_pairs = [","supportThemeIcons":false,"supportHtml":false}}]}
2025-03-11 18:17:22.924 [trace] [Window] ChatListItemRenderer#getNextProgressiveRenderContent: Want to render 161 at 13.783421606789611 words/s, counting...
2025-03-11 18:17:22.924 [trace] [Window] ChatListItemRenderer#getNextProgressiveRenderContent: Chunk 0: Want to render 161 words and found 161 words. Total words in chunk: 216
2025-03-11 18:17:22.924 [trace] [Window] ChatListItemRenderer#getNextProgressiveRenderContent: Want to render 161 words. Rendering 161 words. Buffer: 55 words
2025-03-11 18:17:22.925 [trace] [Window] ChatListItemRenderer#doNextProgressiveRender: not rendering any new content this tick, but more available
2025-03-11 18:17:22.973 [trace] [Window] ChatListItemRenderer#doNextProgressiveRender: START progressive render, index=1, renderData={"lastRenderTime":1741717042923,"renderedWordCount":161,"renderedParts":[{"kind":"markdownContent","content":{"value":"### [test.py](file:///home/user/dev/testing/test.py)\n\nAdd functionality to compare throughput for different batch sizes and generate graphs.\n\n````python\n<vscode_codeblock_uri>file:///home/user/dev/testing/test.py</vscode_codeblock_uri>import time\nimport warnings\nimport os\nfrom typing import List\nimport matplotlib.pyplot as plt # Add matplotlib for graphing\n\nwarnings.filterwarnings(\"ignore\")\nos.environ['TF_CPP_MIN_LOG_LEVEL'] = '3'\n\nfrom outlines import models, generate, samplers\n\n# ...existing code...\n\ndef benchmark_batch_sizes(topic_pairs: List[tuple], llm_choice, batch_sizes: List[int]):\n \"\"\"Benchmark performance for different batch sizes.\"\"\"\n results = {}\n for batch_size in batch_sizes:\n print(f\"\\nBenchmarking batch size: {batch_size}\")\n start_time = time.time()\n for i in range(0, len(topic_pairs), batch_size):\n batch = topic_pairs[i:i + batch_size]\n batch_score_topics(batch, llm_choice)\n total_time = time.time() - start_time\n throughput = len(topic_pairs) / total_time\n results[batch_size] = throughput\n print(f\"Batch size: {batch_size}, Throughput: {throughput:.2f} pairs/second\")\n return results\n\ndef plot_results(results):\n \"\"\"Plot the benchmarking results.\"\"\"\n batch_sizes = list(results.keys())\n throughputs = list(results.values())\n plt.figure(figsize=(10, 6))\n plt.plot(batch_sizes, throughputs, marker='o')\n plt.xlabel('Batch Size')\n plt.ylabel('Throughput (pairs/second)')\n plt.title('Throughput vs Batch Size')\n plt.grid(True)\n plt.show()\n\ndef main():\n # Load the model with generate.choice configured\n llm_choice = load_model()\n \n # List of topic pairs to score\n topic_pairs = [\n ('artificial","supportThemeIcons":false,"supportHtml":false}}]}
2025-03-11 18:17:22.973 [trace] [Window] ChatListItemRenderer#getNextProgressiveRenderContent: Want to render 161 at 13.783421606789611 words/s, counting...
2025-03-11 18:17:22.973 [trace] [Window] ChatListItemRenderer#getNextProgressiveRenderContent: Chunk 0: Want to render 161 words and found 161 words. Total words in chunk: 216
2025-03-11 18:17:22.974 [trace] [Window] ChatListItemRenderer#getNextProgressiveRenderContent: Want to render 161 words. Rendering 161 words. Buffer: 55 words
2025-03-11 18:17:22.974 [trace] [Window] ChatListItemRenderer#doNextProgressiveRender: not rendering any new content this tick, but more available
2025-03-11 18:17:23.023 [trace] [Window] ChatListItemRenderer#doNextProgressiveRender: START progressive render, index=1, renderData={"lastRenderTime":1741717042923,"renderedWordCount":161,"renderedParts":[{"kind":"markdownContent","content":{"value":"### [test.py](file:///home/user/dev/testing/test.py)\n\nAdd functionality to compare throughput for different batch sizes and generate graphs.\n\n````python\n<vscode_codeblock_uri>file:///home/user/dev/testing/test.py</vscode_codeblock_uri>import time\nimport warnings\nimport os\nfrom typing import List\nimport matplotlib.pyplot as plt # Add matplotlib for graphing\n\nwarnings.filterwarnings(\"ignore\")\nos.environ['TF_CPP_MIN_LOG_LEVEL'] = '3'\n\nfrom outlines import models, generate, samplers\n\n# ...existing code...\n\ndef benchmark_batch_sizes(topic_pairs: List[tuple], llm_choice, batch_sizes: List[int]):\n \"\"\"Benchmark performance for different batch sizes.\"\"\"\n results = {}\n for batch_size in batch_sizes:\n print(f\"\\nBenchmarking batch size: {batch_size}\")\n start_time = time.time()\n for i in range(0, len(topic_pairs), batch_size):\n batch = topic_pairs[i:i + batch_size]\n batch_score_topics(batch, llm_choice)\n total_time = time.time() - start_time\n throughput = len(topic_pairs) / total_time\n results[batch_size] = throughput\n print(f\"Batch size: {batch_size}, Throughput: {throughput:.2f} pairs/second\")\n return results\n\ndef plot_results(results):\n \"\"\"Plot the benchmarking results.\"\"\"\n batch_sizes = list(results.keys())\n throughputs = list(results.values())\n plt.figure(figsize=(10, 6))\n plt.plot(batch_sizes, throughputs, marker='o')\n plt.xlabel('Batch Size')\n plt.ylabel('Throughput (pairs/second)')\n plt.title('Throughput vs Batch Size')\n plt.grid(True)\n plt.show()\n\ndef main():\n # Load the model with generate.choice configured\n llm_choice = load_model()\n \n # List of topic pairs to score\n topic_pairs = [\n ('artificial","supportThemeIcons":false,"supportHtml":false}}]}
2025-03-11 18:17:23.024 [trace] [Window] ChatListItemRenderer#getNextProgressiveRenderContent: Want to render 162 at 13.783421606789611 words/s, counting...
2025-03-11 18:17:23.024 [trace] [Window] ChatListItemRenderer#getNextProgressiveRenderContent: Chunk 0: Want to render 162 words and found 162 words. Total words in chunk: 216
2025-03-11 18:17:23.024 [trace] [Window] ChatListItemRenderer#getNextProgressiveRenderContent: Want to render 162 words. Rendering 162 words. Buffer: 54 words
2025-03-11 18:17:23.025 [trace] [Window] ChatListItemRenderer#doNextProgressiveRender: not rendering any new content this tick, but more available
2025-03-11 18:17:23.073 [trace] [Window] ChatListItemRenderer#doNextProgressiveRender: START progressive render, index=1, renderData={"lastRenderTime":1741717043023,"renderedWordCount":162,"renderedParts":[{"kind":"markdownContent","content":{"value":"### [test.py](file:///home/user/dev/testing/test.py)\n\nAdd functionality to compare throughput for different batch sizes and generate graphs.\n\n````python\n<vscode_codeblock_uri>file:///home/user/dev/testing/test.py</vscode_codeblock_uri>import time\nimport warnings\nimport os\nfrom typing import List\nimport matplotlib.pyplot as plt # Add matplotlib for graphing\n\nwarnings.filterwarnings(\"ignore\")\nos.environ['TF_CPP_MIN_LOG_LEVEL'] = '3'\n\nfrom outlines import models, generate, samplers\n\n# ...existing code...\n\ndef benchmark_batch_sizes(topic_pairs: List[tuple], llm_choice, batch_sizes: List[int]):\n \"\"\"Benchmark performance for different batch sizes.\"\"\"\n results = {}\n for batch_size in batch_sizes:\n print(f\"\\nBenchmarking batch size: {batch_size}\")\n start_time = time.time()\n for i in range(0, len(topic_pairs), batch_size):\n batch = topic_pairs[i:i + batch_size]\n batch_score_topics(batch, llm_choice)\n total_time = time.time() - start_time\n throughput = len(topic_pairs) / total_time\n results[batch_size] = throughput\n print(f\"Batch size: {batch_size}, Throughput: {throughput:.2f} pairs/second\")\n return results\n\ndef plot_results(results):\n \"\"\"Plot the benchmarking results.\"\"\"\n batch_sizes = list(results.keys())\n throughputs = list(results.values())\n plt.figure(figsize=(10, 6))\n plt.plot(batch_sizes, throughputs, marker='o')\n plt.xlabel('Batch Size')\n plt.ylabel('Throughput (pairs/second)')\n plt.title('Throughput vs Batch Size')\n plt.grid(True)\n plt.show()\n\ndef main():\n # Load the model with generate.choice configured\n llm_choice = load_model()\n \n # List of topic pairs to score\n topic_pairs = [\n ('artificial intelligence',","supportThemeIcons":false,"supportHtml":false}}]}
2025-03-11 18:17:23.074 [trace] [Window] ChatListItemRenderer#getNextProgressiveRenderContent: Want to render 162 at 13.783421606789611 words/s, counting...
2025-03-11 18:17:23.074 [trace] [Window] ChatListItemRenderer#getNextProgressiveRenderContent: Chunk 0: Want to render 162 words and found 162 words. Total words in chunk: 216
2025-03-11 18:17:23.074 [trace] [Window] ChatListItemRenderer#getNextProgressiveRenderContent: Want to render 162 words. Rendering 162 words. Buffer: 54 words
2025-03-11 18:17:23.074 [trace] [Window] ChatListItemRenderer#doNextProgressiveRender: not rendering any new content this tick, but more available
2025-03-11 18:17:23.123 [trace] [Window] ChatListItemRenderer#doNextProgressiveRender: START progressive render, index=1, renderData={"lastRenderTime":1741717043023,"renderedWordCount":162,"renderedParts":[{"kind":"markdownContent","content":{"value":"### [test.py](file:///home/user/dev/testing/test.py)\n\nAdd functionality to compare throughput for different batch sizes and generate graphs.\n\n````python\n<vscode_codeblock_uri>file:///home/user/dev/testing/test.py</vscode_codeblock_uri>import time\nimport warnings\nimport os\nfrom typing import List\nimport matplotlib.pyplot as plt # Add matplotlib for graphing\n\nwarnings.filterwarnings(\"ignore\")\nos.environ['TF_CPP_MIN_LOG_LEVEL'] = '3'\n\nfrom outlines import models, generate, samplers\n\n# ...existing code...\n\ndef benchmark_batch_sizes(topic_pairs: List[tuple], llm_choice, batch_sizes: List[int]):\n \"\"\"Benchmark performance for different batch sizes.\"\"\"\n results = {}\n for batch_size in batch_sizes:\n print(f\"\\nBenchmarking batch size: {batch_size}\")\n start_time = time.time()\n for i in range(0, len(topic_pairs), batch_size):\n batch = topic_pairs[i:i + batch_size]\n batch_score_topics(batch, llm_choice)\n total_time = time.time() - start_time\n throughput = len(topic_pairs) / total_time\n results[batch_size] = throughput\n print(f\"Batch size: {batch_size}, Throughput: {throughput:.2f} pairs/second\")\n return results\n\ndef plot_results(results):\n \"\"\"Plot the benchmarking results.\"\"\"\n batch_sizes = list(results.keys())\n throughputs = list(results.values())\n plt.figure(figsize=(10, 6))\n plt.plot(batch_sizes, throughputs, marker='o')\n plt.xlabel('Batch Size')\n plt.ylabel('Throughput (pairs/second)')\n plt.title('Throughput vs Batch Size')\n plt.grid(True)\n plt.show()\n\ndef main():\n # Load the model with generate.choice configured\n llm_choice = load_model()\n \n # List of topic pairs to score\n topic_pairs = [\n ('artificial intelligence',","supportThemeIcons":false,"supportHtml":false}}]}
2025-03-11 18:17:23.124 [trace] [Window] ChatListItemRenderer#getNextProgressiveRenderContent: Want to render 163 at 13.783421606789611 words/s, counting...
2025-03-11 18:17:23.124 [trace] [Window] ChatListItemRenderer#getNextProgressiveRenderContent: Chunk 0: Want to render 163 words and found 163 words. Total words in chunk: 216
2025-03-11 18:17:23.124 [trace] [Window] ChatListItemRenderer#getNextProgressiveRenderContent: Want to render 163 words. Rendering 163 words. Buffer: 53 words
2025-03-11 18:17:23.125 [trace] [Window] ChatListItemRenderer#doNextProgressiveRender: not rendering any new content this tick, but more available
2025-03-11 18:17:23.173 [trace] [Window] ChatListItemRenderer#doNextProgressiveRender: START progressive render, index=1, renderData={"lastRenderTime":1741717043123,"renderedWordCount":163,"renderedParts":[{"kind":"markdownContent","content":{"value":"### [test.py](file:///home/user/dev/testing/test.py)\n\nAdd functionality to compare throughput for different batch sizes and generate graphs.\n\n````python\n<vscode_codeblock_uri>file:///home/user/dev/testing/test.py</vscode_codeblock_uri>import time\nimport warnings\nimport os\nfrom typing import List\nimport matplotlib.pyplot as plt # Add matplotlib for graphing\n\nwarnings.filterwarnings(\"ignore\")\nos.environ['TF_CPP_MIN_LOG_LEVEL'] = '3'\n\nfrom outlines import models, generate, samplers\n\n# ...existing code...\n\ndef benchmark_batch_sizes(topic_pairs: List[tuple], llm_choice, batch_sizes: List[int]):\n \"\"\"Benchmark performance for different batch sizes.\"\"\"\n results = {}\n for batch_size in batch_sizes:\n print(f\"\\nBenchmarking batch size: {batch_size}\")\n start_time = time.time()\n for i in range(0, len(topic_pairs), batch_size):\n batch = topic_pairs[i:i + batch_size]\n batch_score_topics(batch, llm_choice)\n total_time = time.time() - start_time\n throughput = len(topic_pairs) / total_time\n results[batch_size] = throughput\n print(f\"Batch size: {batch_size}, Throughput: {throughput:.2f} pairs/second\")\n return results\n\ndef plot_results(results):\n \"\"\"Plot the benchmarking results.\"\"\"\n batch_sizes = list(results.keys())\n throughputs = list(results.values())\n plt.figure(figsize=(10, 6))\n plt.plot(batch_sizes, throughputs, marker='o')\n plt.xlabel('Batch Size')\n plt.ylabel('Throughput (pairs/second)')\n plt.title('Throughput vs Batch Size')\n plt.grid(True)\n plt.show()\n\ndef main():\n # Load the model with generate.choice configured\n llm_choice = load_model()\n \n # List of topic pairs to score\n topic_pairs = [\n ('artificial intelligence', 'machine","supportThemeIcons":false,"supportHtml":false}}]}
2025-03-11 18:17:23.174 [trace] [Window] ChatListItemRenderer#getNextProgressiveRenderContent: Want to render 163 at 13.783421606789611 words/s, counting...
2025-03-11 18:17:23.174 [trace] [Window] ChatListItemRenderer#getNextProgressiveRenderContent: Chunk 0: Want to render 163 words and found 163 words. Total words in chunk: 216
2025-03-11 18:17:23.174 [trace] [Window] ChatListItemRenderer#getNextProgressiveRenderContent: Want to render 163 words. Rendering 163 words. Buffer: 53 words
2025-03-11 18:17:23.175 [trace] [Window] ChatListItemRenderer#doNextProgressiveRender: not rendering any new content this tick, but more available
2025-03-11 18:17:23.223 [trace] [Window] ChatListItemRenderer#doNextProgressiveRender: START progressive render, index=1, renderData={"lastRenderTime":1741717043123,"renderedWordCount":163,"renderedParts":[{"kind":"markdownContent","content":{"value":"### [test.py](file:///home/user/dev/testing/test.py)\n\nAdd functionality to compare throughput for different batch sizes and generate graphs.\n\n````python\n<vscode_codeblock_uri>file:///home/user/dev/testing/test.py</vscode_codeblock_uri>import time\nimport warnings\nimport os\nfrom typing import List\nimport matplotlib.pyplot as plt # Add matplotlib for graphing\n\nwarnings.filterwarnings(\"ignore\")\nos.environ['TF_CPP_MIN_LOG_LEVEL'] = '3'\n\nfrom outlines import models, generate, samplers\n\n# ...existing code...\n\ndef benchmark_batch_sizes(topic_pairs: List[tuple], llm_choice, batch_sizes: List[int]):\n \"\"\"Benchmark performance for different batch sizes.\"\"\"\n results = {}\n for batch_size in batch_sizes:\n print(f\"\\nBenchmarking batch size: {batch_size}\")\n start_time = time.time()\n for i in range(0, len(topic_pairs), batch_size):\n batch = topic_pairs[i:i + batch_size]\n batch_score_topics(batch, llm_choice)\n total_time = time.time() - start_time\n throughput = len(topic_pairs) / total_time\n results[batch_size] = throughput\n print(f\"Batch size: {batch_size}, Throughput: {throughput:.2f} pairs/second\")\n return results\n\ndef plot_results(results):\n \"\"\"Plot the benchmarking results.\"\"\"\n batch_sizes = list(results.keys())\n throughputs = list(results.values())\n plt.figure(figsize=(10, 6))\n plt.plot(batch_sizes, throughputs, marker='o')\n plt.xlabel('Batch Size')\n plt.ylabel('Throughput (pairs/second)')\n plt.title('Throughput vs Batch Size')\n plt.grid(True)\n plt.show()\n\ndef main():\n # Load the model with generate.choice configured\n llm_choice = load_model()\n \n # List of topic pairs to score\n topic_pairs = [\n ('artificial intelligence', 'machine","supportThemeIcons":false,"supportHtml":false}}]}
2025-03-11 18:17:23.224 [trace] [Window] ChatListItemRenderer#getNextProgressiveRenderContent: Want to render 164 at 13.783421606789611 words/s, counting...
2025-03-11 18:17:23.224 [trace] [Window] ChatListItemRenderer#getNextProgressiveRenderContent: Chunk 0: Want to render 164 words and found 164 words. Total words in chunk: 216
2025-03-11 18:17:23.224 [trace] [Window] ChatListItemRenderer#getNextProgressiveRenderContent: Want to render 164 words. Rendering 164 words. Buffer: 52 words
2025-03-11 18:17:23.225 [trace] [Window] ChatListItemRenderer#doNextProgressiveRender: not rendering any new content this tick, but more available
2025-03-11 18:17:23.273 [trace] [Window] ChatListItemRenderer#doNextProgressiveRender: START progressive render, index=1, renderData={"lastRenderTime":1741717043223,"renderedWordCount":164,"renderedParts":[{"kind":"markdownContent","content":{"value":"### [test.py](file:///home/user/dev/testing/test.py)\n\nAdd functionality to compare throughput for different batch sizes and generate graphs.\n\n````python\n<vscode_codeblock_uri>file:///home/user/dev/testing/test.py</vscode_codeblock_uri>import time\nimport warnings\nimport os\nfrom typing import List\nimport matplotlib.pyplot as plt # Add matplotlib for graphing\n\nwarnings.filterwarnings(\"ignore\")\nos.environ['TF_CPP_MIN_LOG_LEVEL'] = '3'\n\nfrom outlines import models, generate, samplers\n\n# ...existing code...\n\ndef benchmark_batch_sizes(topic_pairs: List[tuple], llm_choice, batch_sizes: List[int]):\n \"\"\"Benchmark performance for different batch sizes.\"\"\"\n results = {}\n for batch_size in batch_sizes:\n print(f\"\\nBenchmarking batch size: {batch_size}\")\n start_time = time.time()\n for i in range(0, len(topic_pairs), batch_size):\n batch = topic_pairs[i:i + batch_size]\n batch_score_topics(batch, llm_choice)\n total_time = time.time() - start_time\n throughput = len(topic_pairs) / total_time\n results[batch_size] = throughput\n print(f\"Batch size: {batch_size}, Throughput: {throughput:.2f} pairs/second\")\n return results\n\ndef plot_results(results):\n \"\"\"Plot the benchmarking results.\"\"\"\n batch_sizes = list(results.keys())\n throughputs = list(results.values())\n plt.figure(figsize=(10, 6))\n plt.plot(batch_sizes, throughputs, marker='o')\n plt.xlabel('Batch Size')\n plt.ylabel('Throughput (pairs/second)')\n plt.title('Throughput vs Batch Size')\n plt.grid(True)\n plt.show()\n\ndef main():\n # Load the model with generate.choice configured\n llm_choice = load_model()\n \n # List of topic pairs to score\n topic_pairs = [\n ('artificial intelligence', 'machine learning'),","supportThemeIcons":false,"supportHtml":false}}]}
2025-03-11 18:17:23.274 [trace] [Window] ChatListItemRenderer#getNextProgressiveRenderContent: Want to render 164 at 13.783421606789611 words/s, counting...
2025-03-11 18:17:23.274 [trace] [Window] ChatListItemRenderer#getNextProgressiveRenderContent: Chunk 0: Want to render 164 words and found 164 words. Total words in chunk: 216
2025-03-11 18:17:23.274 [trace] [Window] ChatListItemRenderer#getNextProgressiveRenderContent: Want to render 164 words. Rendering 164 words. Buffer: 52 words
2025-03-11 18:17:23.275 [trace] [Window] ChatListItemRenderer#doNextProgressiveRender: not rendering any new content this tick, but more available
2025-03-11 18:17:23.325 [trace] [Window] ChatListItemRenderer#doNextProgressiveRender: START progressive render, index=1, renderData={"lastRenderTime":1741717043223,"renderedWordCount":164,"renderedParts":[{"kind":"markdownContent","content":{"value":"### [test.py](file:///home/user/dev/testing/test.py)\n\nAdd functionality to compare throughput for different batch sizes and generate graphs.\n\n````python\n<vscode_codeblock_uri>file:///home/user/dev/testing/test.py</vscode_codeblock_uri>import time\nimport warnings\nimport os\nfrom typing import List\nimport matplotlib.pyplot as plt # Add matplotlib for graphing\n\nwarnings.filterwarnings(\"ignore\")\nos.environ['TF_CPP_MIN_LOG_LEVEL'] = '3'\n\nfrom outlines import models, generate, samplers\n\n# ...existing code...\n\ndef benchmark_batch_sizes(topic_pairs: List[tuple], llm_choice, batch_sizes: List[int]):\n \"\"\"Benchmark performance for different batch sizes.\"\"\"\n results = {}\n for batch_size in batch_sizes:\n print(f\"\\nBenchmarking batch size: {batch_size}\")\n start_time = time.time()\n for i in range(0, len(topic_pairs), batch_size):\n batch = topic_pairs[i:i + batch_size]\n batch_score_topics(batch, llm_choice)\n total_time = time.time() - start_time\n throughput = len(topic_pairs) / total_time\n results[batch_size] = throughput\n print(f\"Batch size: {batch_size}, Throughput: {throughput:.2f} pairs/second\")\n return results\n\ndef plot_results(results):\n \"\"\"Plot the benchmarking results.\"\"\"\n batch_sizes = list(results.keys())\n throughputs = list(results.values())\n plt.figure(figsize=(10, 6))\n plt.plot(batch_sizes, throughputs, marker='o')\n plt.xlabel('Batch Size')\n plt.ylabel('Throughput (pairs/second)')\n plt.title('Throughput vs Batch Size')\n plt.grid(True)\n plt.show()\n\ndef main():\n # Load the model with generate.choice configured\n llm_choice = load_model()\n \n # List of topic pairs to score\n topic_pairs = [\n ('artificial intelligence', 'machine learning'),","supportThemeIcons":false,"supportHtml":false}}]}
2025-03-11 18:17:23.325 [trace] [Window] ChatListItemRenderer#getNextProgressiveRenderContent: Want to render 165 at 13.783421606789611 words/s, counting...
2025-03-11 18:17:23.325 [trace] [Window] ChatListItemRenderer#getNextProgressiveRenderContent: Chunk 0: Want to render 165 words and found 165 words. Total words in chunk: 216
2025-03-11 18:17:23.325 [trace] [Window] ChatListItemRenderer#getNextProgressiveRenderContent: Want to render 165 words. Rendering 165 words. Buffer: 51 words
2025-03-11 18:17:23.326 [trace] [Window] ChatListItemRenderer#doNextProgressiveRender: not rendering any new content this tick, but more available
2025-03-11 18:17:23.373 [trace] [Window] ChatListItemRenderer#doNextProgressiveRender: START progressive render, index=1, renderData={"lastRenderTime":1741717043323,"renderedWordCount":165,"renderedParts":[{"kind":"markdownContent","content":{"value":"### [test.py](file:///home/user/dev/testing/test.py)\n\nAdd functionality to compare throughput for different batch sizes and generate graphs.\n\n````python\n<vscode_codeblock_uri>file:///home/user/dev/testing/test.py</vscode_codeblock_uri>import time\nimport warnings\nimport os\nfrom typing import List\nimport matplotlib.pyplot as plt # Add matplotlib for graphing\n\nwarnings.filterwarnings(\"ignore\")\nos.environ['TF_CPP_MIN_LOG_LEVEL'] = '3'\n\nfrom outlines import models, generate, samplers\n\n# ...existing code...\n\ndef benchmark_batch_sizes(topic_pairs: List[tuple], llm_choice, batch_sizes: List[int]):\n \"\"\"Benchmark performance for different batch sizes.\"\"\"\n results = {}\n for batch_size in batch_sizes:\n print(f\"\\nBenchmarking batch size: {batch_size}\")\n start_time = time.time()\n for i in range(0, len(topic_pairs), batch_size):\n batch = topic_pairs[i:i + batch_size]\n batch_score_topics(batch, llm_choice)\n total_time = time.time() - start_time\n throughput = len(topic_pairs) / total_time\n results[batch_size] = throughput\n print(f\"Batch size: {batch_size}, Throughput: {throughput:.2f} pairs/second\")\n return results\n\ndef plot_results(results):\n \"\"\"Plot the benchmarking results.\"\"\"\n batch_sizes = list(results.keys())\n throughputs = list(results.values())\n plt.figure(figsize=(10, 6))\n plt.plot(batch_sizes, throughputs, marker='o')\n plt.xlabel('Batch Size')\n plt.ylabel('Throughput (pairs/second)')\n plt.title('Throughput vs Batch Size')\n plt.grid(True)\n plt.show()\n\ndef main():\n # Load the model with generate.choice configured\n llm_choice = load_model()\n \n # List of topic pairs to score\n topic_pairs = [\n ('artificial intelligence', 'machine learning'),\n ('quantum","supportThemeIcons":false,"supportHtml":false}}]}
2025-03-11 18:17:23.374 [trace] [Window] ChatListItemRenderer#getNextProgressiveRenderContent: Want to render 165 at 13.783421606789611 words/s, counting...
2025-03-11 18:17:23.374 [trace] [Window] ChatListItemRenderer#getNextProgressiveRenderContent: Chunk 0: Want to render 165 words and found 165 words. Total words in chunk: 216
2025-03-11 18:17:23.374 [trace] [Window] ChatListItemRenderer#getNextProgressiveRenderContent: Want to render 165 words. Rendering 165 words. Buffer: 51 words
2025-03-11 18:17:23.375 [trace] [Window] ChatListItemRenderer#doNextProgressiveRender: not rendering any new content this tick, but more available
2025-03-11 18:17:23.423 [trace] [Window] ChatListItemRenderer#doNextProgressiveRender: START progressive render, index=1, renderData={"lastRenderTime":1741717043323,"renderedWordCount":165,"renderedParts":[{"kind":"markdownContent","content":{"value":"### [test.py](file:///home/user/dev/testing/test.py)\n\nAdd functionality to compare throughput for different batch sizes and generate graphs.\n\n````python\n<vscode_codeblock_uri>file:///home/user/dev/testing/test.py</vscode_codeblock_uri>import time\nimport warnings\nimport os\nfrom typing import List\nimport matplotlib.pyplot as plt # Add matplotlib for graphing\n\nwarnings.filterwarnings(\"ignore\")\nos.environ['TF_CPP_MIN_LOG_LEVEL'] = '3'\n\nfrom outlines import models, generate, samplers\n\n# ...existing code...\n\ndef benchmark_batch_sizes(topic_pairs: List[tuple], llm_choice, batch_sizes: List[int]):\n \"\"\"Benchmark performance for different batch sizes.\"\"\"\n results = {}\n for batch_size in batch_sizes:\n print(f\"\\nBenchmarking batch size: {batch_size}\")\n start_time = time.time()\n for i in range(0, len(topic_pairs), batch_size):\n batch = topic_pairs[i:i + batch_size]\n batch_score_topics(batch, llm_choice)\n total_time = time.time() - start_time\n throughput = len(topic_pairs) / total_time\n results[batch_size] = throughput\n print(f\"Batch size: {batch_size}, Throughput: {throughput:.2f} pairs/second\")\n return results\n\ndef plot_results(results):\n \"\"\"Plot the benchmarking results.\"\"\"\n batch_sizes = list(results.keys())\n throughputs = list(results.values())\n plt.figure(figsize=(10, 6))\n plt.plot(batch_sizes, throughputs, marker='o')\n plt.xlabel('Batch Size')\n plt.ylabel('Throughput (pairs/second)')\n plt.title('Throughput vs Batch Size')\n plt.grid(True)\n plt.show()\n\ndef main():\n # Load the model with generate.choice configured\n llm_choice = load_model()\n \n # List of topic pairs to score\n topic_pairs = [\n ('artificial intelligence', 'machine learning'),\n ('quantum","supportThemeIcons":false,"supportHtml":false}}]}
2025-03-11 18:17:23.423 [trace] [Window] ChatListItemRenderer#getNextProgressiveRenderContent: Want to render 166 at 13.783421606789611 words/s, counting...
2025-03-11 18:17:23.424 [trace] [Window] ChatListItemRenderer#getNextProgressiveRenderContent: Chunk 0: Want to render 166 words and found 166 words. Total words in chunk: 216
2025-03-11 18:17:23.424 [trace] [Window] ChatListItemRenderer#getNextProgressiveRenderContent: Want to render 166 words. Rendering 166 words. Buffer: 50 words
2025-03-11 18:17:23.424 [trace] [Window] ChatListItemRenderer#doNextProgressiveRender: not rendering any new content this tick, but more available
2025-03-11 18:17:23.473 [trace] [Window] ChatListItemRenderer#doNextProgressiveRender: START progressive render, index=1, renderData={"lastRenderTime":1741717043423,"renderedWordCount":166,"renderedParts":[{"kind":"markdownContent","content":{"value":"### [test.py](file:///home/user/dev/testing/test.py)\n\nAdd functionality to compare throughput for different batch sizes and generate graphs.\n\n````python\n<vscode_codeblock_uri>file:///home/user/dev/testing/test.py</vscode_codeblock_uri>import time\nimport warnings\nimport os\nfrom typing import List\nimport matplotlib.pyplot as plt # Add matplotlib for graphing\n\nwarnings.filterwarnings(\"ignore\")\nos.environ['TF_CPP_MIN_LOG_LEVEL'] = '3'\n\nfrom outlines import models, generate, samplers\n\n# ...existing code...\n\ndef benchmark_batch_sizes(topic_pairs: List[tuple], llm_choice, batch_sizes: List[int]):\n \"\"\"Benchmark performance for different batch sizes.\"\"\"\n results = {}\n for batch_size in batch_sizes:\n print(f\"\\nBenchmarking batch size: {batch_size}\")\n start_time = time.time()\n for i in range(0, len(topic_pairs), batch_size):\n batch = topic_pairs[i:i + batch_size]\n batch_score_topics(batch, llm_choice)\n total_time = time.time() - start_time\n throughput = len(topic_pairs) / total_time\n results[batch_size] = throughput\n print(f\"Batch size: {batch_size}, Throughput: {throughput:.2f} pairs/second\")\n return results\n\ndef plot_results(results):\n \"\"\"Plot the benchmarking results.\"\"\"\n batch_sizes = list(results.keys())\n throughputs = list(results.values())\n plt.figure(figsize=(10, 6))\n plt.plot(batch_sizes, throughputs, marker='o')\n plt.xlabel('Batch Size')\n plt.ylabel('Throughput (pairs/second)')\n plt.title('Throughput vs Batch Size')\n plt.grid(True)\n plt.show()\n\ndef main():\n # Load the model with generate.choice configured\n llm_choice = load_model()\n \n # List of topic pairs to score\n topic_pairs = [\n ('artificial intelligence', 'machine learning'),\n ('quantum physics',","supportThemeIcons":false,"supportHtml":false}}]}
2025-03-11 18:17:23.474 [trace] [Window] ChatListItemRenderer#getNextProgressiveRenderContent: Want to render 166 at 13.783421606789611 words/s, counting...
2025-03-11 18:17:23.474 [trace] [Window] ChatListItemRenderer#getNextProgressiveRenderContent: Chunk 0: Want to render 166 words and found 166 words. Total words in chunk: 216
2025-03-11 18:17:23.475 [trace] [Window] ChatListItemRenderer#getNextProgressiveRenderContent: Want to render 166 words. Rendering 166 words. Buffer: 50 words
2025-03-11 18:17:23.475 [trace] [Window] ChatListItemRenderer#doNextProgressiveRender: not rendering any new content this tick, but more available
2025-03-11 18:17:23.523 [trace] [Window] ChatListItemRenderer#doNextProgressiveRender: START progressive render, index=1, renderData={"lastRenderTime":1741717043423,"renderedWordCount":166,"renderedParts":[{"kind":"markdownContent","content":{"value":"### [test.py](file:///home/user/dev/testing/test.py)\n\nAdd functionality to compare throughput for different batch sizes and generate graphs.\n\n````python\n<vscode_codeblock_uri>file:///home/user/dev/testing/test.py</vscode_codeblock_uri>import time\nimport warnings\nimport os\nfrom typing import List\nimport matplotlib.pyplot as plt # Add matplotlib for graphing\n\nwarnings.filterwarnings(\"ignore\")\nos.environ['TF_CPP_MIN_LOG_LEVEL'] = '3'\n\nfrom outlines import models, generate, samplers\n\n# ...existing code...\n\ndef benchmark_batch_sizes(topic_pairs: List[tuple], llm_choice, batch_sizes: List[int]):\n \"\"\"Benchmark performance for different batch sizes.\"\"\"\n results = {}\n for batch_size in batch_sizes:\n print(f\"\\nBenchmarking batch size: {batch_size}\")\n start_time = time.time()\n for i in range(0, len(topic_pairs), batch_size):\n batch = topic_pairs[i:i + batch_size]\n batch_score_topics(batch, llm_choice)\n total_time = time.time() - start_time\n throughput = len(topic_pairs) / total_time\n results[batch_size] = throughput\n print(f\"Batch size: {batch_size}, Throughput: {throughput:.2f} pairs/second\")\n return results\n\ndef plot_results(results):\n \"\"\"Plot the benchmarking results.\"\"\"\n batch_sizes = list(results.keys())\n throughputs = list(results.values())\n plt.figure(figsize=(10, 6))\n plt.plot(batch_sizes, throughputs, marker='o')\n plt.xlabel('Batch Size')\n plt.ylabel('Throughput (pairs/second)')\n plt.title('Throughput vs Batch Size')\n plt.grid(True)\n plt.show()\n\ndef main():\n # Load the model with generate.choice configured\n llm_choice = load_model()\n \n # List of topic pairs to score\n topic_pairs = [\n ('artificial intelligence', 'machine learning'),\n ('quantum physics',","supportThemeIcons":false,"supportHtml":false}}]}
2025-03-11 18:17:23.524 [trace] [Window] ChatListItemRenderer#getNextProgressiveRenderContent: Want to render 167 at 13.783421606789611 words/s, counting...
2025-03-11 18:17:23.524 [trace] [Window] ChatListItemRenderer#getNextProgressiveRenderContent: Chunk 0: Want to render 167 words and found 167 words. Total words in chunk: 216
2025-03-11 18:17:23.524 [trace] [Window] ChatListItemRenderer#getNextProgressiveRenderContent: Want to render 167 words. Rendering 167 words. Buffer: 49 words
2025-03-11 18:17:23.525 [trace] [Window] ChatListItemRenderer#doNextProgressiveRender: not rendering any new content this tick, but more available
2025-03-11 18:17:23.573 [trace] [Window] ChatListItemRenderer#doNextProgressiveRender: START progressive render, index=1, renderData={"lastRenderTime":1741717043524,"renderedWordCount":167,"renderedParts":[{"kind":"markdownContent","content":{"value":"### [test.py](file:///home/user/dev/testing/test.py)\n\nAdd functionality to compare throughput for different batch sizes and generate graphs.\n\n````python\n<vscode_codeblock_uri>file:///home/user/dev/testing/test.py</vscode_codeblock_uri>import time\nimport warnings\nimport os\nfrom typing import List\nimport matplotlib.pyplot as plt # Add matplotlib for graphing\n\nwarnings.filterwarnings(\"ignore\")\nos.environ['TF_CPP_MIN_LOG_LEVEL'] = '3'\n\nfrom outlines import models, generate, samplers\n\n# ...existing code...\n\ndef benchmark_batch_sizes(topic_pairs: List[tuple], llm_choice, batch_sizes: List[int]):\n \"\"\"Benchmark performance for different batch sizes.\"\"\"\n results = {}\n for batch_size in batch_sizes:\n print(f\"\\nBenchmarking batch size: {batch_size}\")\n start_time = time.time()\n for i in range(0, len(topic_pairs), batch_size):\n batch = topic_pairs[i:i + batch_size]\n batch_score_topics(batch, llm_choice)\n total_time = time.time() - start_time\n throughput = len(topic_pairs) / total_time\n results[batch_size] = throughput\n print(f\"Batch size: {batch_size}, Throughput: {throughput:.2f} pairs/second\")\n return results\n\ndef plot_results(results):\n \"\"\"Plot the benchmarking results.\"\"\"\n batch_sizes = list(results.keys())\n throughputs = list(results.values())\n plt.figure(figsize=(10, 6))\n plt.plot(batch_sizes, throughputs, marker='o')\n plt.xlabel('Batch Size')\n plt.ylabel('Throughput (pairs/second)')\n plt.title('Throughput vs Batch Size')\n plt.grid(True)\n plt.show()\n\ndef main():\n # Load the model with generate.choice configured\n llm_choice = load_model()\n \n # List of topic pairs to score\n topic_pairs = [\n ('artificial intelligence', 'machine learning'),\n ('quantum physics', 'astronomy'),","supportThemeIcons":false,"supportHtml":false}}]}
2025-03-11 18:17:23.573 [trace] [Window] ChatListItemRenderer#getNextProgressiveRenderContent: Want to render 167 at 13.783421606789611 words/s, counting...
2025-03-11 18:17:23.574 [trace] [Window] ChatListItemRenderer#getNextProgressiveRenderContent: Chunk 0: Want to render 167 words and found 167 words. Total words in chunk: 216
2025-03-11 18:17:23.574 [trace] [Window] ChatListItemRenderer#getNextProgressiveRenderContent: Want to render 167 words. Rendering 167 words. Buffer: 49 words
2025-03-11 18:17:23.574 [trace] [Window] ChatListItemRenderer#doNextProgressiveRender: not rendering any new content this tick, but more available
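The "Chunk 0: Want to render N words and found N words. Total words in chunk: 216" lines show the renderer walking its markdown parts and taking words from each until the budget is met. Below is a rough TypeScript sketch of that word-taking step. The whitespace tokenization here is naive, and the real markdown-aware counter has to be more careful around code fences and links, so treat both the name `takeNWords` and the logic as assumptions.

```typescript
// Hypothetical word-taking helper matching the "found N words" log lines.
// Naive whitespace splitting; assumed behavior, not VS Code's counter.
function takeNWords(text: string, budget: number): {
  value: string;          // prefix of `text` containing up to `budget` words
  foundWordCount: number; // "found N words"
  totalWordCount: number; // "Total words in chunk: M"
} {
  const words = text.split(/\s+/).filter(w => w.length > 0);
  const taken = words.slice(0, budget);
  return {
    value: taken.join(' '),
    foundWordCount: taken.length,
    totalWordCount: words.length,
  };
}
```

When the found count equals the budget, as in every entry of this excerpt, the remainder of the chunk stays buffered; a shorter chunk would report fewer found words, and the loop would presumably carry the leftover budget into Chunk 1.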
2025-03-11 18:17:23.623 [trace] [Window] ChatListItemRenderer#doNextProgressiveRender: START progressive render, index=1, renderData={"lastRenderTime":1741717043524,"renderedWordCount":167,"renderedParts":[{"kind":"markdownContent","content":{"value":"### [test.py](file:///home/user/dev/testing/test.py)\n\nAdd functionality to compare throughput for different batch sizes and generate graphs.\n\n````python\n<vscode_codeblock_uri>file:///home/user/dev/testing/test.py</vscode_codeblock_uri>import time\nimport warnings\nimport os\nfrom typing import List\nimport matplotlib.pyplot as plt # Add matplotlib for graphing\n\nwarnings.filterwarnings(\"ignore\")\nos.environ['TF_CPP_MIN_LOG_LEVEL'] = '3'\n\nfrom outlines import models, generate, samplers\n\n# ...existing code...\n\ndef benchmark_batch_sizes(topic_pairs: List[tuple], llm_choice, batch_sizes: List[int]):\n \"\"\"Benchmark performance for different batch sizes.\"\"\"\n results = {}\n for batch_size in batch_sizes:\n print(f\"\\nBenchmarking batch size: {batch_size}\")\n start_time = time.time()\n for i in range(0, len(topic_pairs), batch_size):\n batch = topic_pairs[i:i + batch_size]\n batch_score_topics(batch, llm_choice)\n total_time = time.time() - start_time\n throughput = len(topic_pairs) / total_time\n results[batch_size] = throughput\n print(f\"Batch size: {batch_size}, Throughput: {throughput:.2f} pairs/second\")\n return results\n\ndef plot_results(results):\n \"\"\"Plot the benchmarking results.\"\"\"\n batch_sizes = list(results.keys())\n throughputs = list(results.values())\n plt.figure(figsize=(10, 6))\n plt.plot(batch_sizes, throughputs, marker='o')\n plt.xlabel('Batch Size')\n plt.ylabel('Throughput (pairs/second)')\n plt.title('Throughput vs Batch Size')\n plt.grid(True)\n plt.show()\n\ndef main():\n # Load the model with generate.choice configured\n llm_choice = load_model()\n \n # List of topic pairs to score\n topic_pairs = [\n ('artificial intelligence', 'machine learning'),\n ('quantum physics', 'astronomy'),","supportThemeIcons":false,"supportHtml":false}}]}
2025-03-11 18:17:23.624 [trace] [Window] ChatListItemRenderer#getNextProgressiveRenderContent: Want to render 168 at 13.783421606789611 words/s, counting...
2025-03-11 18:17:23.624 [trace] [Window] ChatListItemRenderer#getNextProgressiveRenderContent: Chunk 0: Want to render 168 words and found 168 words. Total words in chunk: 216
2025-03-11 18:17:23.625 [trace] [Window] ChatListItemRenderer#getNextProgressiveRenderContent: Want to render 168 words. Rendering 168 words. Buffer: 48 words
2025-03-11 18:17:23.625 [trace] [Window] ChatListItemRenderer#doNextProgressiveRender: not rendering any new content this tick, but more available
2025-03-11 18:17:23.673 [trace] [Window] ChatListItemRenderer#doNextProgressiveRender: START progressive render, index=1, renderData={"lastRenderTime":1741717043624,"renderedWordCount":168,"renderedParts":[{"kind":"markdownContent","content":{"value":"### [test.py](file:///home/user/dev/testing/test.py)\n\nAdd functionality to compare throughput for different batch sizes and generate graphs.\n\n````python\n<vscode_codeblock_uri>file:///home/user/dev/testing/test.py</vscode_codeblock_uri>import time\nimport warnings\nimport os\nfrom typing import List\nimport matplotlib.pyplot as plt # Add matplotlib for graphing\n\nwarnings.filterwarnings(\"ignore\")\nos.environ['TF_CPP_MIN_LOG_LEVEL'] = '3'\n\nfrom outlines import models, generate, samplers\n\n# ...existing code...\n\ndef benchmark_batch_sizes(topic_pairs: List[tuple], llm_choice, batch_sizes: List[int]):\n \"\"\"Benchmark performance for different batch sizes.\"\"\"\n results = {}\n for batch_size in batch_sizes:\n print(f\"\\nBenchmarking batch size: {batch_size}\")\n start_time = time.time()\n for i in range(0, len(topic_pairs), batch_size):\n batch = topic_pairs[i:i + batch_size]\n batch_score_topics(batch, llm_choice)\n total_time = time.time() - start_time\n throughput = len(topic_pairs) / total_time\n results[batch_size] = throughput\n print(f\"Batch size: {batch_size}, Throughput: {throughput:.2f} pairs/second\")\n return results\n\ndef plot_results(results):\n \"\"\"Plot the benchmarking results.\"\"\"\n batch_sizes = list(results.keys())\n throughputs = list(results.values())\n plt.figure(figsize=(10, 6))\n plt.plot(batch_sizes, throughputs, marker='o')\n plt.xlabel('Batch Size')\n plt.ylabel('Throughput (pairs/second)')\n plt.title('Throughput vs Batch Size')\n plt.grid(True)\n plt.show()\n\ndef main():\n # Load the model with generate.choice configured\n llm_choice = load_model()\n \n # List of topic pairs to score\n topic_pairs = [\n ('artificial intelligence', 'machine learning'),\n ('quantum physics', 'astronomy'),\n ('poetry',","supportThemeIcons":false,"supportHtml":false}}]}
2025-03-11 18:17:23.674 [trace] [Window] ChatListItemRenderer#getNextProgressiveRenderContent: Want to render 168 at 13.783421606789611 words/s, counting...
2025-03-11 18:17:23.674 [trace] [Window] ChatListItemRenderer#getNextProgressiveRenderContent: Chunk 0: Want to render 168 words and found 168 words. Total words in chunk: 216
2025-03-11 18:17:23.674 [trace] [Window] ChatListItemRenderer#getNextProgressiveRenderContent: Want to render 168 words. Rendering 168 words. Buffer: 48 words
2025-03-11 18:17:23.675 [trace] [Window] ChatListItemRenderer#doNextProgressiveRender: not rendering any new content this tick, but more available
2025-03-11 18:17:23.723 [trace] [Window] ChatListItemRenderer#doNextProgressiveRender: START progressive render, index=1, renderData={"lastRenderTime":1741717043624,"renderedWordCount":168,"renderedParts":[{"kind":"markdownContent","content":{"value":"### [test.py](file:///home/user/dev/testing/test.py)\n\nAdd functionality to compare throughput for different batch sizes and generate graphs.\n\n````python\n<vscode_codeblock_uri>file:///home/user/dev/testing/test.py</vscode_codeblock_uri>import time\nimport warnings\nimport os\nfrom typing import List\nimport matplotlib.pyplot as plt # Add matplotlib for graphing\n\nwarnings.filterwarnings(\"ignore\")\nos.environ['TF_CPP_MIN_LOG_LEVEL'] = '3'\n\nfrom outlines import models, generate, samplers\n\n# ...existing code...\n\ndef benchmark_batch_sizes(topic_pairs: List[tuple], llm_choice, batch_sizes: List[int]):\n \"\"\"Benchmark performance for different batch sizes.\"\"\"\n results = {}\n for batch_size in batch_sizes:\n print(f\"\\nBenchmarking batch size: {batch_size}\")\n start_time = time.time()\n for i in range(0, len(topic_pairs), batch_size):\n batch = topic_pairs[i:i + batch_size]\n batch_score_topics(batch, llm_choice)\n total_time = time.time() - start_time\n throughput = len(topic_pairs) / total_time\n results[batch_size] = throughput\n print(f\"Batch size: {batch_size}, Throughput: {throughput:.2f} pairs/second\")\n return results\n\ndef plot_results(results):\n \"\"\"Plot the benchmarking results.\"\"\"\n batch_sizes = list(results.keys())\n throughputs = list(results.values())\n plt.figure(figsize=(10, 6))\n plt.plot(batch_sizes, throughputs, marker='o')\n plt.xlabel('Batch Size')\n plt.ylabel('Throughput (pairs/second)')\n plt.title('Throughput vs Batch Size')\n plt.grid(True)\n plt.show()\n\ndef main():\n # Load the model with generate.choice configured\n llm_choice = load_model()\n \n # List of topic pairs to score\n topic_pairs = [\n ('artificial intelligence', 'machine learning'),\n ('quantum physics', 'astronomy'),\n ('poetry',","supportThemeIcons":false,"supportHtml":false}}]}
2025-03-11 18:17:23.724 [trace] [Window] ChatListItemRenderer#getNextProgressiveRenderContent: Want to render 169 at 13.783421606789611 words/s, counting...
2025-03-11 18:17:23.724 [trace] [Window] ChatListItemRenderer#getNextProgressiveRenderContent: Chunk 0: Want to render 169 words and found 169 words. Total words in chunk: 216
2025-03-11 18:17:23.724 [trace] [Window] ChatListItemRenderer#getNextProgressiveRenderContent: Want to render 169 words. Rendering 169 words. Buffer: 47 words
2025-03-11 18:17:23.725 [trace] [Window] ChatListItemRenderer#doNextProgressiveRender: not rendering any new content this tick, but more available
2025-03-11 18:17:23.773 [trace] [Window] ChatListItemRenderer#doNextProgressiveRender: START progressive render, index=1, renderData={"lastRenderTime":1741717043723,"renderedWordCount":169,"renderedParts":[{"kind":"markdownContent","content":{"value":"### [test.py](file:///home/user/dev/testing/test.py)\n\nAdd functionality to compare throughput for different batch sizes and generate graphs.\n\n````python\n<vscode_codeblock_uri>file:///home/user/dev/testing/test.py</vscode_codeblock_uri>import time\nimport warnings\nimport os\nfrom typing import List\nimport matplotlib.pyplot as plt # Add matplotlib for graphing\n\nwarnings.filterwarnings(\"ignore\")\nos.environ['TF_CPP_MIN_LOG_LEVEL'] = '3'\n\nfrom outlines import models, generate, samplers\n\n# ...existing code...\n\ndef benchmark_batch_sizes(topic_pairs: List[tuple], llm_choice, batch_sizes: List[int]):\n \"\"\"Benchmark performance for different batch sizes.\"\"\"\n results = {}\n for batch_size in batch_sizes:\n print(f\"\\nBenchmarking batch size: {batch_size}\")\n start_time = time.time()\n for i in range(0, len(topic_pairs), batch_size):\n batch = topic_pairs[i:i + batch_size]\n batch_score_topics(batch, llm_choice)\n total_time = time.time() - start_time\n throughput = len(topic_pairs) / total_time\n results[batch_size] = throughput\n print(f\"Batch size: {batch_size}, Throughput: {throughput:.2f} pairs/second\")\n return results\n\ndef plot_results(results):\n \"\"\"Plot the benchmarking results.\"\"\"\n batch_sizes = list(results.keys())\n throughputs = list(results.values())\n plt.figure(figsize=(10, 6))\n plt.plot(batch_sizes, throughputs, marker='o')\n plt.xlabel('Batch Size')\n plt.ylabel('Throughput (pairs/second)')\n plt.title('Throughput vs Batch Size')\n plt.grid(True)\n plt.show()\n\ndef main():\n # Load the model with generate.choice configured\n llm_choice = load_model()\n \n # List of topic pairs to score\n topic_pairs = [\n ('artificial intelligence', 'machine learning'),\n ('quantum physics', 'astronomy'),\n ('poetry', 'literature'),","supportThemeIcons":false,"supportHtml":false}}]}
2025-03-11 18:17:23.774 [trace] [Window] ChatListItemRenderer#getNextProgressiveRenderContent: Want to render 169 at 13.783421606789611 words/s, counting...
2025-03-11 18:17:23.774 [trace] [Window] ChatListItemRenderer#getNextProgressiveRenderContent: Chunk 0: Want to render 169 words and found 169 words. Total words in chunk: 216
2025-03-11 18:17:23.774 [trace] [Window] ChatListItemRenderer#getNextProgressiveRenderContent: Want to render 169 words. Rendering 169 words. Buffer: 47 words
2025-03-11 18:17:23.775 [trace] [Window] ChatListItemRenderer#doNextProgressiveRender: not rendering any new content this tick, but more available
2025-03-11 18:17:23.823 [trace] [Window] ChatListItemRenderer#doNextProgressiveRender: START progressive render, index=1, renderData={"lastRenderTime":1741717043723,"renderedWordCount":169,"renderedParts":[{"kind":"markdownContent","content":{"value":"### [test.py](file:///home/user/dev/testing/test.py)\n\nAdd functionality to compare throughput for different batch sizes and generate graphs.\n\n````python\n<vscode_codeblock_uri>file:///home/user/dev/testing/test.py</vscode_codeblock_uri>import time\nimport warnings\nimport os\nfrom typing import List\nimport matplotlib.pyplot as plt # Add matplotlib for graphing\n\nwarnings.filterwarnings(\"ignore\")\nos.environ['TF_CPP_MIN_LOG_LEVEL'] = '3'\n\nfrom outlines import models, generate, samplers\n\n# ...existing code...\n\ndef benchmark_batch_sizes(topic_pairs: List[tuple], llm_choice, batch_sizes: List[int]):\n \"\"\"Benchmark performance for different batch sizes.\"\"\"\n results = {}\n for batch_size in batch_sizes:\n print(f\"\\nBenchmarking batch size: {batch_size}\")\n start_time = time.time()\n for i in range(0, len(topic_pairs), batch_size):\n batch = topic_pairs[i:i + batch_size]\n batch_score_topics(batch, llm_choice)\n total_time = time.time() - start_time\n throughput = len(topic_pairs) / total_time\n results[batch_size] = throughput\n print(f\"Batch size: {batch_size}, Throughput: {throughput:.2f} pairs/second\")\n return results\n\ndef plot_results(results):\n \"\"\"Plot the benchmarking results.\"\"\"\n batch_sizes = list(results.keys())\n throughputs = list(results.values())\n plt.figure(figsize=(10, 6))\n plt.plot(batch_sizes, throughputs, marker='o')\n plt.xlabel('Batch Size')\n plt.ylabel('Throughput (pairs/second)')\n plt.title('Throughput vs Batch Size')\n plt.grid(True)\n plt.show()\n\ndef main():\n # Load the model with generate.choice configured\n llm_choice = load_model()\n \n # List of topic pairs to score\n topic_pairs = [\n ('artificial intelligence', 'machine learning'),\n ('quantum physics', 'astronomy'),\n ('poetry', 'literature'),","supportThemeIcons":false,"supportHtml":false}}]}
2025-03-11 18:17:23.824 [trace] [Window] ChatListItemRenderer#getNextProgressiveRenderContent: Want to render 170 at 13.783421606789611 words/s, counting...
2025-03-11 18:17:23.824 [trace] [Window] ChatListItemRenderer#getNextProgressiveRenderContent: Chunk 0: Want to render 170 words and found 170 words. Total words in chunk: 216
2025-03-11 18:17:23.824 [trace] [Window] ChatListItemRenderer#getNextProgressiveRenderContent: Want to render 170 words. Rendering 170 words. Buffer: 46 words
2025-03-11 18:17:23.825 [trace] [Window] ChatListItemRenderer#doNextProgressiveRender: not rendering any new content this tick, but more available
2025-03-11 18:17:23.873 [trace] [Window] ChatListItemRenderer#doNextProgressiveRender: START progressive render, index=1, renderData={"lastRenderTime":1741717043824,"renderedWordCount":170,"renderedParts":[{"kind":"markdownContent","content":{"value":"### [test.py](file:///home/user/dev/testing/test.py)\n\nAdd functionality to compare throughput for different batch sizes and generate graphs.\n\n````python\n<vscode_codeblock_uri>file:///home/user/dev/testing/test.py</vscode_codeblock_uri>import time\nimport warnings\nimport os\nfrom typing import List\nimport matplotlib.pyplot as plt # Add matplotlib for graphing\n\nwarnings.filterwarnings(\"ignore\")\nos.environ['TF_CPP_MIN_LOG_LEVEL'] = '3'\n\nfrom outlines import models, generate, samplers\n\n# ...existing code...\n\ndef benchmark_batch_sizes(topic_pairs: List[tuple], llm_choice, batch_sizes: List[int]):\n \"\"\"Benchmark performance for different batch sizes.\"\"\"\n results = {}\n for batch_size in batch_sizes:\n print(f\"\\nBenchmarking batch size: {batch_size}\")\n start_time = time.time()\n for i in range(0, len(topic_pairs), batch_size):\n batch = topic_pairs[i:i + batch_size]\n batch_score_topics(batch, llm_choice)\n total_time = time.time() - start_time\n throughput = len(topic_pairs) / total_time\n results[batch_size] = throughput\n print(f\"Batch size: {batch_size}, Throughput: {throughput:.2f} pairs/second\")\n return results\n\ndef plot_results(results):\n \"\"\"Plot the benchmarking results.\"\"\"\n batch_sizes = list(results.keys())\n throughputs = list(results.values())\n plt.figure(figsize=(10, 6))\n plt.plot(batch_sizes, throughputs, marker='o')\n plt.xlabel('Batch Size')\n plt.ylabel('Throughput (pairs/second)')\n plt.title('Throughput vs Batch Size')\n plt.grid(True)\n plt.show()\n\ndef main():\n # Load the model with generate.choice configured\n llm_choice = load_model()\n \n # List of topic pairs to score\n topic_pairs = [\n ('artificial intelligence', 'machine learning'),\n ('quantum physics', 'astronomy'),\n ('poetry', 'literature'),\n ('basketball',","supportThemeIcons":false,"supportHtml":false}}]}
2025-03-11 18:17:23.874 [trace] [Window] ChatListItemRenderer#getNextProgressiveRenderContent: Want to render 170 at 13.783421606789611 words/s, counting...
2025-03-11 18:17:23.874 [trace] [Window] ChatListItemRenderer#getNextProgressiveRenderContent: Chunk 0: Want to render 170 words and found 170 words. Total words in chunk: 216
2025-03-11 18:17:23.874 [trace] [Window] ChatListItemRenderer#getNextProgressiveRenderContent: Want to render 170 words. Rendering 170 words. Buffer: 46 words
2025-03-11 18:17:23.875 [trace] [Window] ChatListItemRenderer#doNextProgressiveRender: not rendering any new content this tick, but more available
2025-03-11 18:17:23.924 [trace] [Window] ChatListItemRenderer#doNextProgressiveRender: START progressive render, index=1, renderData={"lastRenderTime":1741717043824,"renderedWordCount":170,"renderedParts":[{"kind":"markdownContent","content":{"value":"### [test.py](file:///home/user/dev/testing/test.py)\n\nAdd functionality to compare throughput for different batch sizes and generate graphs.\n\n````python\n<vscode_codeblock_uri>file:///home/user/dev/testing/test.py</vscode_codeblock_uri>import time\nimport warnings\nimport os\nfrom typing import List\nimport matplotlib.pyplot as plt # Add matplotlib for graphing\n\nwarnings.filterwarnings(\"ignore\")\nos.environ['TF_CPP_MIN_LOG_LEVEL'] = '3'\n\nfrom outlines import models, generate, samplers\n\n# ...existing code...\n\ndef benchmark_batch_sizes(topic_pairs: List[tuple], llm_choice, batch_sizes: List[int]):\n \"\"\"Benchmark performance for different batch sizes.\"\"\"\n results = {}\n for batch_size in batch_sizes:\n print(f\"\\nBenchmarking batch size: {batch_size}\")\n start_time = time.time()\n for i in range(0, len(topic_pairs), batch_size):\n batch = topic_pairs[i:i + batch_size]\n batch_score_topics(batch, llm_choice)\n total_time = time.time() - start_time\n throughput = len(topic_pairs) / total_time\n results[batch_size] = throughput\n print(f\"Batch size: {batch_size}, Throughput: {throughput:.2f} pairs/second\")\n return results\n\ndef plot_results(results):\n \"\"\"Plot the benchmarking results.\"\"\"\n batch_sizes = list(results.keys())\n throughputs = list(results.values())\n plt.figure(figsize=(10, 6))\n plt.plot(batch_sizes, throughputs, marker='o')\n plt.xlabel('Batch Size')\n plt.ylabel('Throughput (pairs/second)')\n plt.title('Throughput vs Batch Size')\n plt.grid(True)\n plt.show()\n\ndef main():\n # Load the model with generate.choice configured\n llm_choice = load_model()\n \n # List of topic pairs to score\n topic_pairs = [\n ('artificial intelligence', 'machine learning'),\n ('quantum physics', 'astronomy'),\n ('poetry', 'literature'),\n ('basketball',","supportThemeIcons":false,"supportHtml":false}}]}
2025-03-11 18:17:23.925 [trace] [Window] ChatListItemRenderer#getNextProgressiveRenderContent: Want to render 171 at 13.783421606789611 words/s, counting...
2025-03-11 18:17:23.925 [trace] [Window] ChatListItemRenderer#getNextProgressiveRenderContent: Chunk 0: Want to render 171 words and found 171 words. Total words in chunk: 216
2025-03-11 18:17:23.925 [trace] [Window] ChatListItemRenderer#getNextProgressiveRenderContent: Want to render 171 words. Rendering 171 words. Buffer: 45 words
2025-03-11 18:17:23.926 [trace] [Window] ChatListItemRenderer#doNextProgressiveRender: not rendering any new content this tick, but more available
2025-03-11 18:17:23.973 [trace] [Window] ChatListItemRenderer#doNextProgressiveRender: START progressive render, index=1, renderData={"lastRenderTime":1741717043924,"renderedWordCount":171,"renderedParts":[{"kind":"markdownContent","content":{"value":"### [test.py](file:///home/user/dev/testing/test.py)\n\nAdd functionality to compare throughput for different batch sizes and generate graphs.\n\n````python\n<vscode_codeblock_uri>file:///home/user/dev/testing/test.py</vscode_codeblock_uri>import time\nimport warnings\nimport os\nfrom typing import List\nimport matplotlib.pyplot as plt # Add matplotlib for graphing\n\nwarnings.filterwarnings(\"ignore\")\nos.environ['TF_CPP_MIN_LOG_LEVEL'] = '3'\n\nfrom outlines import models, generate, samplers\n\n# ...existing code...\n\ndef benchmark_batch_sizes(topic_pairs: List[tuple], llm_choice, batch_sizes: List[int]):\n \"\"\"Benchmark performance for different batch sizes.\"\"\"\n results = {}\n for batch_size in batch_sizes:\n print(f\"\\nBenchmarking batch size: {batch_size}\")\n start_time = time.time()\n for i in range(0, len(topic_pairs), batch_size):\n batch = topic_pairs[i:i + batch_size]\n batch_score_topics(batch, llm_choice)\n total_time = time.time() - start_time\n throughput = len(topic_pairs) / total_time\n results[batch_size] = throughput\n print(f\"Batch size: {batch_size}, Throughput: {throughput:.2f} pairs/second\")\n return results\n\ndef plot_results(results):\n \"\"\"Plot the benchmarking results.\"\"\"\n batch_sizes = list(results.keys())\n throughputs = list(results.values())\n plt.figure(figsize=(10, 6))\n plt.plot(batch_sizes, throughputs, marker='o')\n plt.xlabel('Batch Size')\n plt.ylabel('Throughput (pairs/second)')\n plt.title('Throughput vs Batch Size')\n plt.grid(True)\n plt.show()\n\ndef main():\n # Load the model with generate.choice configured\n llm_choice = load_model()\n \n # List of topic pairs to score\n topic_pairs = [\n ('artificial intelligence', 'machine learning'),\n ('quantum physics', 'astronomy'),\n ('poetry', 'literature'),\n ('basketball', 'soccer'),","supportThemeIcons":false,"supportHtml":false}}]}
2025-03-11 18:17:23.974 [trace] [Window] ChatListItemRenderer#getNextProgressiveRenderContent: Want to render 171 at 13.783421606789611 words/s, counting...
2025-03-11 18:17:23.974 [trace] [Window] ChatListItemRenderer#getNextProgressiveRenderContent: Chunk 0: Want to render 171 words and found 171 words. Total words in chunk: 216
2025-03-11 18:17:23.974 [trace] [Window] ChatListItemRenderer#getNextProgressiveRenderContent: Want to render 171 words. Rendering 171 words. Buffer: 45 words
2025-03-11 18:17:23.974 [trace] [Window] ChatListItemRenderer#doNextProgressiveRender: not rendering any new content this tick, but more available
2025-03-11 18:17:24.023 [trace] [Window] ChatListItemRenderer#doNextProgressiveRender: START progressive render, index=1, renderData={"lastRenderTime":1741717043924,"renderedWordCount":171,"renderedParts":[{"kind":"markdownContent","content":{"value":"### [test.py](file:///home/user/dev/testing/test.py)\n\nAdd functionality to compare throughput for different batch sizes and generate graphs.\n\n````python\n<vscode_codeblock_uri>file:///home/user/dev/testing/test.py</vscode_codeblock_uri>import time\nimport warnings\nimport os\nfrom typing import List\nimport matplotlib.pyplot as plt # Add matplotlib for graphing\n\nwarnings.filterwarnings(\"ignore\")\nos.environ['TF_CPP_MIN_LOG_LEVEL'] = '3'\n\nfrom outlines import models, generate, samplers\n\n# ...existing code...\n\ndef benchmark_batch_sizes(topic_pairs: List[tuple], llm_choice, batch_sizes: List[int]):\n \"\"\"Benchmark performance for different batch sizes.\"\"\"\n results = {}\n for batch_size in batch_sizes:\n print(f\"\\nBenchmarking batch size: {batch_size}\")\n start_time = time.time()\n for i in range(0, len(topic_pairs), batch_size):\n batch = topic_pairs[i:i + batch_size]\n batch_score_topics(batch, llm_choice)\n total_time = time.time() - start_time\n throughput = len(topic_pairs) / total_time\n results[batch_size] = throughput\n print(f\"Batch size: {batch_size}, Throughput: {throughput:.2f} pairs/second\")\n return results\n\ndef plot_results(results):\n \"\"\"Plot the benchmarking results.\"\"\"\n batch_sizes = list(results.keys())\n throughputs = list(results.values())\n plt.figure(figsize=(10, 6))\n plt.plot(batch_sizes, throughputs, marker='o')\n plt.xlabel('Batch Size')\n plt.ylabel('Throughput (pairs/second)')\n plt.title('Throughput vs Batch Size')\n plt.grid(True)\n plt.show()\n\ndef main():\n # Load the model with generate.choice configured\n llm_choice = load_model()\n \n # List of topic pairs to score\n topic_pairs = [\n ('artificial intelligence', 'machine learning'),\n ('quantum physics', 'astronomy'),\n ('poetry', 'literature'),\n ('basketball', 'soccer'),","supportThemeIcons":false,"supportHtml":false}}]}
2025-03-11 18:17:24.024 [trace] [Window] ChatListItemRenderer#getNextProgressiveRenderContent: Want to render 172 at 13.783421606789611 words/s, counting...
2025-03-11 18:17:24.024 [trace] [Window] ChatListItemRenderer#getNextProgressiveRenderContent: Chunk 0: Want to render 172 words and found 172 words. Total words in chunk: 216
2025-03-11 18:17:24.024 [trace] [Window] ChatListItemRenderer#getNextProgressiveRenderContent: Want to render 172 words. Rendering 172 words. Buffer: 44 words
2025-03-11 18:17:24.024 [trace] [Window] ChatListItemRenderer#doNextProgressiveRender: not rendering any new content this tick, but more available
2025-03-11 18:17:24.073 [trace] [Window] ChatListItemRenderer#doNextProgressiveRender: START progressive render, index=1, renderData={"lastRenderTime":1741717044023,"renderedWordCount":172,"renderedParts":[{"kind":"markdownContent","content":{"value":"### [test.py](file:///home/user/dev/testing/test.py)\n\nAdd functionality to compare throughput for different batch sizes and generate graphs.\n\n````python\n<vscode_codeblock_uri>file:///home/user/dev/testing/test.py</vscode_codeblock_uri>import time\nimport warnings\nimport os\nfrom typing import List\nimport matplotlib.pyplot as plt # Add matplotlib for graphing\n\nwarnings.filterwarnings(\"ignore\")\nos.environ['TF_CPP_MIN_LOG_LEVEL'] = '3'\n\nfrom outlines import models, generate, samplers\n\n# ...existing code...\n\ndef benchmark_batch_sizes(topic_pairs: List[tuple], llm_choice, batch_sizes: List[int]):\n \"\"\"Benchmark performance for different batch sizes.\"\"\"\n results = {}\n for batch_size in batch_sizes:\n print(f\"\\nBenchmarking batch size: {batch_size}\")\n start_time = time.time()\n for i in range(0, len(topic_pairs), batch_size):\n batch = topic_pairs[i:i + batch_size]\n batch_score_topics(batch, llm_choice)\n total_time = time.time() - start_time\n throughput = len(topic_pairs) / total_time\n results[batch_size] = throughput\n print(f\"Batch size: {batch_size}, Throughput: {throughput:.2f} pairs/second\")\n return results\n\ndef plot_results(results):\n \"\"\"Plot the benchmarking results.\"\"\"\n batch_sizes = list(results.keys())\n throughputs = list(results.values())\n plt.figure(figsize=(10, 6))\n plt.plot(batch_sizes, throughputs, marker='o')\n plt.xlabel('Batch Size')\n plt.ylabel('Throughput (pairs/second)')\n plt.title('Throughput vs Batch Size')\n plt.grid(True)\n plt.show()\n\ndef main():\n # Load the model with generate.choice configured\n llm_choice = load_model()\n \n # List of topic pairs to score\n topic_pairs = [\n ('artificial intelligence', 'machine learning'),\n ('quantum physics', 'astronomy'),\n ('poetry', 'literature'),\n ('basketball', 'soccer'),\n ('biology',","supportThemeIcons":false,"supportHtml":false}}]}
2025-03-11 18:17:24.073 [trace] [Window] ChatListItemRenderer#getNextProgressiveRenderContent: Want to render 172 at 13.783421606789611 words/s, counting...
2025-03-11 18:17:24.074 [trace] [Window] ChatListItemRenderer#getNextProgressiveRenderContent: Chunk 0: Want to render 172 words and found 172 words. Total words in chunk: 216
2025-03-11 18:17:24.074 [trace] [Window] ChatListItemRenderer#getNextProgressiveRenderContent: Want to render 172 words. Rendering 172 words. Buffer: 44 words
2025-03-11 18:17:24.074 [trace] [Window] ChatListItemRenderer#doNextProgressiveRender: not rendering any new content this tick, but more available
2025-03-11 18:17:24.123 [trace] [Window] ChatListItemRenderer#doNextProgressiveRender: START progressive render, index=1, renderData={"lastRenderTime":1741717044023,"renderedWordCount":172,"renderedParts":[{"kind":"markdownContent","content":{"value":"### [test.py](file:///home/user/dev/testing/test.py)\n\nAdd functionality to compare throughput for different batch sizes and generate graphs.\n\n````python\n<vscode_codeblock_uri>file:///home/user/dev/testing/test.py</vscode_codeblock_uri>import time\nimport warnings\nimport os\nfrom typing import List\nimport matplotlib.pyplot as plt # Add matplotlib for graphing\n\nwarnings.filterwarnings(\"ignore\")\nos.environ['TF_CPP_MIN_LOG_LEVEL'] = '3'\n\nfrom outlines import models, generate, samplers\n\n# ...existing code...\n\ndef benchmark_batch_sizes(topic_pairs: List[tuple], llm_choice, batch_sizes: List[int]):\n \"\"\"Benchmark performance for different batch sizes.\"\"\"\n results = {}\n for batch_size in batch_sizes:\n print(f\"\\nBenchmarking batch size: {batch_size}\")\n start_time = time.time()\n for i in range(0, len(topic_pairs), batch_size):\n batch = topic_pairs[i:i + batch_size]\n batch_score_topics(batch, llm_choice)\n total_time = time.time() - start_time\n throughput = len(topic_pairs) / total_time\n results[batch_size] = throughput\n print(f\"Batch size: {batch_size}, Throughput: {throughput:.2f} pairs/second\")\n return results\n\ndef plot_results(results):\n \"\"\"Plot the benchmarking results.\"\"\"\n batch_sizes = list(results.keys())\n throughputs = list(results.values())\n plt.figure(figsize=(10, 6))\n plt.plot(batch_sizes, throughputs, marker='o')\n plt.xlabel('Batch Size')\n plt.ylabel('Throughput (pairs/second)')\n plt.title('Throughput vs Batch Size')\n plt.grid(True)\n plt.show()\n\ndef main():\n # Load the model with generate.choice configured\n llm_choice = load_model()\n \n # List of topic pairs to score\n topic_pairs = [\n ('artificial intelligence', 'machine learning'),\n ('quantum physics', 'astronomy'),\n ('poetry', 'literature'),\n ('basketball', 'soccer'),\n ('biology',","supportThemeIcons":false,"supportHtml":false}}]}
2025-03-11 18:17:24.124 [trace] [Window] ChatListItemRenderer#getNextProgressiveRenderContent: Want to render 173 at 13.783421606789611 words/s, counting...
2025-03-11 18:17:24.124 [trace] [Window] ChatListItemRenderer#getNextProgressiveRenderContent: Chunk 0: Want to render 173 words and found 173 words. Total words in chunk: 216
2025-03-11 18:17:24.124 [trace] [Window] ChatListItemRenderer#getNextProgressiveRenderContent: Want to render 173 words. Rendering 173 words. Buffer: 43 words
2025-03-11 18:17:24.124 [trace] [Window] ChatListItemRenderer#doNextProgressiveRender: not rendering any new content this tick, but more available
2025-03-11 18:17:24.173 [trace] [Window] ChatListItemRenderer#doNextProgressiveRender: START progressive render, index=1, renderData={"lastRenderTime":1741717044123,"renderedWordCount":173,"renderedParts":[{"kind":"markdownContent","content":{"value":"### [test.py](file:///home/user/dev/testing/test.py)\n\nAdd functionality to compare throughput for different batch sizes and generate graphs.\n\n````python\n<vscode_codeblock_uri>file:///home/user/dev/testing/test.py</vscode_codeblock_uri>import time\nimport warnings\nimport os\nfrom typing import List\nimport matplotlib.pyplot as plt # Add matplotlib for graphing\n\nwarnings.filterwarnings(\"ignore\")\nos.environ['TF_CPP_MIN_LOG_LEVEL'] = '3'\n\nfrom outlines import models, generate, samplers\n\n# ...existing code...\n\ndef benchmark_batch_sizes(topic_pairs: List[tuple], llm_choice, batch_sizes: List[int]):\n \"\"\"Benchmark performance for different batch sizes.\"\"\"\n results = {}\n for batch_size in batch_sizes:\n print(f\"\\nBenchmarking batch size: {batch_size}\")\n start_time = time.time()\n for i in range(0, len(topic_pairs), batch_size):\n batch = topic_pairs[i:i + batch_size]\n batch_score_topics(batch, llm_choice)\n total_time = time.time() - start_time\n throughput = len(topic_pairs) / total_time\n results[batch_size] = throughput\n print(f\"Batch size: {batch_size}, Throughput: {throughput:.2f} pairs/second\")\n return results\n\ndef plot_results(results):\n \"\"\"Plot the benchmarking results.\"\"\"\n batch_sizes = list(results.keys())\n throughputs = list(results.values())\n plt.figure(figsize=(10, 6))\n plt.plot(batch_sizes, throughputs, marker='o')\n plt.xlabel('Batch Size')\n plt.ylabel('Throughput (pairs/second)')\n plt.title('Throughput vs Batch Size')\n plt.grid(True)\n plt.show()\n\ndef main():\n # Load the model with generate.choice configured\n llm_choice = load_model()\n \n # List of topic pairs to score\n topic_pairs = [\n ('artificial intelligence', 'machine learning'),\n ('quantum physics', 'astronomy'),\n ('poetry', 'literature'),\n ('basketball', 'soccer'),\n ('biology', 'chemistry'),","supportThemeIcons":false,"supportHtml":false}}]}
2025-03-11 18:17:24.174 [trace] [Window] ChatListItemRenderer#getNextProgressiveRenderContent: Want to render 173 at 13.783421606789611 words/s, counting...
2025-03-11 18:17:24.175 [trace] [Window] ChatListItemRenderer#getNextProgressiveRenderContent: Chunk 0: Want to render 173 words and found 173 words. Total words in chunk: 216
2025-03-11 18:17:24.175 [trace] [Window] ChatListItemRenderer#getNextProgressiveRenderContent: Want to render 173 words. Rendering 173 words. Buffer: 43 words
2025-03-11 18:17:24.175 [trace] [Window] ChatListItemRenderer#doNextProgressiveRender: not rendering any new content this tick, but more available
2025-03-11 18:17:24.223 [trace] [Window] ChatListItemRenderer#doNextProgressiveRender: START progressive render, index=1, renderData={"lastRenderTime":1741717044123,"renderedWordCount":173,"renderedParts":[{"kind":"markdownContent","content":{"value":"### [test.py](file:///home/user/dev/testing/test.py)\n\nAdd functionality to compare throughput for different batch sizes and generate graphs.\n\n````python\n<vscode_codeblock_uri>file:///home/user/dev/testing/test.py</vscode_codeblock_uri>import time\nimport warnings\nimport os\nfrom typing import List\nimport matplotlib.pyplot as plt # Add matplotlib for graphing\n\nwarnings.filterwarnings(\"ignore\")\nos.environ['TF_CPP_MIN_LOG_LEVEL'] = '3'\n\nfrom outlines import models, generate, samplers\n\n# ...existing code...\n\ndef benchmark_batch_sizes(topic_pairs: List[tuple], llm_choice, batch_sizes: List[int]):\n \"\"\"Benchmark performance for different batch sizes.\"\"\"\n results = {}\n for batch_size in batch_sizes:\n print(f\"\\nBenchmarking batch size: {batch_size}\")\n start_time = time.time()\n for i in range(0, len(topic_pairs), batch_size):\n batch = topic_pairs[i:i + batch_size]\n batch_score_topics(batch, llm_choice)\n total_time = time.time() - start_time\n throughput = len(topic_pairs) / total_time\n results[batch_size] = throughput\n print(f\"Batch size: {batch_size}, Throughput: {throughput:.2f} pairs/second\")\n return results\n\ndef plot_results(results):\n \"\"\"Plot the benchmarking results.\"\"\"\n batch_sizes = list(results.keys())\n throughputs = list(results.values())\n plt.figure(figsize=(10, 6))\n plt.plot(batch_sizes, throughputs, marker='o')\n plt.xlabel('Batch Size')\n plt.ylabel('Throughput (pairs/second)')\n plt.title('Throughput vs Batch Size')\n plt.grid(True)\n plt.show()\n\ndef main():\n # Load the model with generate.choice configured\n llm_choice = load_model()\n \n # List of topic pairs to score\n topic_pairs = [\n ('artificial intelligence', 'machine learning'),\n ('quantum physics', 'astronomy'),\n ('poetry', 'literature'),\n ('basketball', 'soccer'),\n ('biology', 'chemistry'),","supportThemeIcons":false,"supportHtml":false}}]}
2025-03-11 18:17:24.224 [trace] [Window] ChatListItemRenderer#getNextProgressiveRenderContent: Want to render 174 at 13.783421606789611 words/s, counting...
2025-03-11 18:17:24.224 [trace] [Window] ChatListItemRenderer#getNextProgressiveRenderContent: Chunk 0: Want to render 174 words and found 174 words. Total words in chunk: 216
2025-03-11 18:17:24.225 [trace] [Window] ChatListItemRenderer#getNextProgressiveRenderContent: Want to render 174 words. Rendering 174 words. Buffer: 42 words
2025-03-11 18:17:24.225 [trace] [Window] ChatListItemRenderer#doNextProgressiveRender: not rendering any new content this tick, but more available
2025-03-11 18:17:24.273 [trace] [Window] ChatListItemRenderer#doNextProgressiveRender: START progressive render, index=1, renderData={"lastRenderTime":1741717044223,"renderedWordCount":174,"renderedParts":[{"kind":"markdownContent","content":{"value":"### [test.py](file:///home/user/dev/testing/test.py)\n\nAdd functionality to compare throughput for different batch sizes and generate graphs.\n\n````python\n<vscode_codeblock_uri>file:///home/user/dev/testing/test.py</vscode_codeblock_uri>import time\nimport warnings\nimport os\nfrom typing import List\nimport matplotlib.pyplot as plt # Add matplotlib for graphing\n\nwarnings.filterwarnings(\"ignore\")\nos.environ['TF_CPP_MIN_LOG_LEVEL'] = '3'\n\nfrom outlines import models, generate, samplers\n\n# ...existing code...\n\ndef benchmark_batch_sizes(topic_pairs: List[tuple], llm_choice, batch_sizes: List[int]):\n \"\"\"Benchmark performance for different batch sizes.\"\"\"\n results = {}\n for batch_size in batch_sizes:\n print(f\"\\nBenchmarking batch size: {batch_size}\")\n start_time = time.time()\n for i in range(0, len(topic_pairs), batch_size):\n batch = topic_pairs[i:i + batch_size]\n batch_score_topics(batch, llm_choice)\n total_time = time.time() - start_time\n throughput = len(topic_pairs) / total_time\n results[batch_size] = throughput\n print(f\"Batch size: {batch_size}, Throughput: {throughput:.2f} pairs/second\")\n return results\n\ndef plot_results(results):\n \"\"\"Plot the benchmarking results.\"\"\"\n batch_sizes = list(results.keys())\n throughputs = list(results.values())\n plt.figure(figsize=(10, 6))\n plt.plot(batch_sizes, throughputs, marker='o')\n plt.xlabel('Batch Size')\n plt.ylabel('Throughput (pairs/second)')\n plt.title('Throughput vs Batch Size')\n plt.grid(True)\n plt.show()\n\ndef main():\n # Load the model with generate.choice configured\n llm_choice = load_model()\n \n # List of topic pairs to score\n topic_pairs = [\n ('artificial intelligence', 'machine learning'),\n ('quantum physics', 'astronomy'),\n ('poetry', 'literature'),\n ('basketball', 'soccer'),\n ('biology', 'chemistry'),\n ('music',","supportThemeIcons":false,"supportHtml":false}}]}
2025-03-11 18:17:24.273 [trace] [Window] ChatListItemRenderer#getNextProgressiveRenderContent: Want to render 174 at 13.783421606789611 words/s, counting...
2025-03-11 18:17:24.273 [trace] [Window] ChatListItemRenderer#getNextProgressiveRenderContent: Chunk 0: Want to render 174 words and found 174 words. Total words in chunk: 216
2025-03-11 18:17:24.274 [trace] [Window] ChatListItemRenderer#getNextProgressiveRenderContent: Want to render 174 words. Rendering 174 words. Buffer: 42 words
2025-03-11 18:17:24.274 [trace] [Window] ChatListItemRenderer#doNextProgressiveRender: not rendering any new content this tick, but more available
2025-03-11 18:17:24.323 [trace] [Window] ChatListItemRenderer#doNextProgressiveRender: START progressive render, index=1, renderData={"lastRenderTime":1741717044223,"renderedWordCount":174,"renderedParts":[{"kind":"markdownContent","content":{"value":"### [test.py](file:///home/user/dev/testing/test.py)\n\nAdd functionality to compare throughput for different batch sizes and generate graphs.\n\n````python\n<vscode_codeblock_uri>file:///home/user/dev/testing/test.py</vscode_codeblock_uri>import time\nimport warnings\nimport os\nfrom typing import List\nimport matplotlib.pyplot as plt # Add matplotlib for graphing\n\nwarnings.filterwarnings(\"ignore\")\nos.environ['TF_CPP_MIN_LOG_LEVEL'] = '3'\n\nfrom outlines import models, generate, samplers\n\n# ...existing code...\n\ndef benchmark_batch_sizes(topic_pairs: List[tuple], llm_choice, batch_sizes: List[int]):\n \"\"\"Benchmark performance for different batch sizes.\"\"\"\n results = {}\n for batch_size in batch_sizes:\n print(f\"\\nBenchmarking batch size: {batch_size}\")\n start_time = time.time()\n for i in range(0, len(topic_pairs), batch_size):\n batch = topic_pairs[i:i + batch_size]\n batch_score_topics(batch, llm_choice)\n total_time = time.time() - start_time\n throughput = len(topic_pairs) / total_time\n results[batch_size] = throughput\n print(f\"Batch size: {batch_size}, Throughput: {throughput:.2f} pairs/second\")\n return results\n\ndef plot_results(results):\n \"\"\"Plot the benchmarking results.\"\"\"\n batch_sizes = list(results.keys())\n throughputs = list(results.values())\n plt.figure(figsize=(10, 6))\n plt.plot(batch_sizes, throughputs, marker='o')\n plt.xlabel('Batch Size')\n plt.ylabel('Throughput (pairs/second)')\n plt.title('Throughput vs Batch Size')\n plt.grid(True)\n plt.show()\n\ndef main():\n # Load the model with generate.choice configured\n llm_choice = load_model()\n \n # List of topic pairs to score\n topic_pairs = [\n ('artificial intelligence', 'machine learning'),\n ('quantum physics', 'astronomy'),\n ('poetry', 'literature'),\n ('basketball', 'soccer'),\n ('biology', 'chemistry'),\n ('music',","supportThemeIcons":false,"supportHtml":false}}]}
2025-03-11 18:17:24.323 [trace] [Window] ChatListItemRenderer#getNextProgressiveRenderContent: Want to render 175 at 13.783421606789611 words/s, counting...
2025-03-11 18:17:24.324 [trace] [Window] ChatListItemRenderer#getNextProgressiveRenderContent: Chunk 0: Want to render 175 words and found 175 words. Total words in chunk: 216
2025-03-11 18:17:24.324 [trace] [Window] ChatListItemRenderer#getNextProgressiveRenderContent: Want to render 175 words. Rendering 175 words. Buffer: 41 words
2025-03-11 18:17:24.324 [trace] [Window] ChatListItemRenderer#doNextProgressiveRender: not rendering any new content this tick, but more available
2025-03-11 18:17:24.375 [trace] [Window] ChatListItemRenderer#doNextProgressiveRender: START progressive render, index=1, renderData={"lastRenderTime":1741717044323,"renderedWordCount":175,"renderedParts":[{"kind":"markdownContent","content":{"value":"### [test.py](file:///home/user/dev/testing/test.py)\n\nAdd functionality to compare throughput for different batch sizes and generate graphs.\n\n````python\n<vscode_codeblock_uri>file:///home/user/dev/testing/test.py</vscode_codeblock_uri>import time\nimport warnings\nimport os\nfrom typing import List\nimport matplotlib.pyplot as plt # Add matplotlib for graphing\n\nwarnings.filterwarnings(\"ignore\")\nos.environ['TF_CPP_MIN_LOG_LEVEL'] = '3'\n\nfrom outlines import models, generate, samplers\n\n# ...existing code...\n\ndef benchmark_batch_sizes(topic_pairs: List[tuple], llm_choice, batch_sizes: List[int]):\n \"\"\"Benchmark performance for different batch sizes.\"\"\"\n results = {}\n for batch_size in batch_sizes:\n print(f\"\\nBenchmarking batch size: {batch_size}\")\n start_time = time.time()\n for i in range(0, len(topic_pairs), batch_size):\n batch = topic_pairs[i:i + batch_size]\n batch_score_topics(batch, llm_choice)\n total_time = time.time() - start_time\n throughput = len(topic_pairs) / total_time\n results[batch_size] = throughput\n print(f\"Batch size: {batch_size}, Throughput: {throughput:.2f} pairs/second\")\n return results\n\ndef plot_results(results):\n \"\"\"Plot the benchmarking results.\"\"\"\n batch_sizes = list(results.keys())\n throughputs = list(results.values())\n plt.figure(figsize=(10, 6))\n plt.plot(batch_sizes, throughputs, marker='o')\n plt.xlabel('Batch Size')\n plt.ylabel('Throughput (pairs/second)')\n plt.title('Throughput vs Batch Size')\n plt.grid(True)\n plt.show()\n\ndef main():\n # Load the model with generate.choice configured\n llm_choice = load_model()\n \n # List of topic pairs to score\n topic_pairs = [\n ('artificial intelligence', 'machine learning'),\n ('quantum physics', 'astronomy'),\n ('poetry', 'literature'),\n ('basketball', 'soccer'),\n ('biology', 'chemistry'),\n ('music', 'painting'),","supportThemeIcons":false,"supportHtml":false}}]}
2025-03-11 18:17:24.376 [trace] [Window] ChatListItemRenderer#getNextProgressiveRenderContent: Want to render 175 at 13.783421606789611 words/s, counting...
2025-03-11 18:17:24.376 [trace] [Window] ChatListItemRenderer#getNextProgressiveRenderContent: Chunk 0: Want to render 175 words and found 175 words. Total words in chunk: 216
2025-03-11 18:17:24.376 [trace] [Window] ChatListItemRenderer#getNextProgressiveRenderContent: Want to render 175 words. Rendering 175 words. Buffer: 41 words
2025-03-11 18:17:24.376 [trace] [Window] ChatListItemRenderer#doNextProgressiveRender: not rendering any new content this tick, but more available
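The 'Chunk 0: Want to render N words and found N words' bookkeeping implies a counter that walks the pending markdown, stops after N words, and also notes the chunk's total. A rough approximation only; the whitespace-delimited word split is an assumption, and count_words_up_to is a made-up name, not the renderer's API:

````python
import re

_WORD = re.compile(r"\S+")  # assumption: words are whitespace-delimited runs

def count_words_up_to(chunk: str, wanted: int) -> tuple[int, int, str]:
    """Return (found, total, renderable_prefix) for one markdown chunk,
    mirroring 'Want to render N words and found N words. Total words in chunk: 216'."""
    words = list(_WORD.finditer(chunk))
    total = len(words)
    found = min(wanted, total)
    end = words[found - 1].end() if found else 0
    return found, total, chunk[:end]
````

Against this log's 216-word chunk, count_words_up_to(chunk, 176) would report (176, 216, ...), leaving the 40-word buffer shown at 18:17:24.423.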
2025-03-11 18:17:24.423 [trace] [Window] ChatListItemRenderer#doNextProgressiveRender: START progressive render, index=1, renderData={"lastRenderTime":1741717044323,"renderedWordCount":175,"renderedParts":[{"kind":"markdownContent","content":{"value":"### [test.py](file:///home/user/dev/testing/test.py)\n\nAdd functionality to compare throughput for different batch sizes and generate graphs.\n\n````python\n<vscode_codeblock_uri>file:///home/user/dev/testing/test.py</vscode_codeblock_uri>import time\nimport warnings\nimport os\nfrom typing import List\nimport matplotlib.pyplot as plt # Add matplotlib for graphing\n\nwarnings.filterwarnings(\"ignore\")\nos.environ['TF_CPP_MIN_LOG_LEVEL'] = '3'\n\nfrom outlines import models, generate, samplers\n\n# ...existing code...\n\ndef benchmark_batch_sizes(topic_pairs: List[tuple], llm_choice, batch_sizes: List[int]):\n \"\"\"Benchmark performance for different batch sizes.\"\"\"\n results = {}\n for batch_size in batch_sizes:\n print(f\"\\nBenchmarking batch size: {batch_size}\")\n start_time = time.time()\n for i in range(0, len(topic_pairs), batch_size):\n batch = topic_pairs[i:i + batch_size]\n batch_score_topics(batch, llm_choice)\n total_time = time.time() - start_time\n throughput = len(topic_pairs) / total_time\n results[batch_size] = throughput\n print(f\"Batch size: {batch_size}, Throughput: {throughput:.2f} pairs/second\")\n return results\n\ndef plot_results(results):\n \"\"\"Plot the benchmarking results.\"\"\"\n batch_sizes = list(results.keys())\n throughputs = list(results.values())\n plt.figure(figsize=(10, 6))\n plt.plot(batch_sizes, throughputs, marker='o')\n plt.xlabel('Batch Size')\n plt.ylabel('Throughput (pairs/second)')\n plt.title('Throughput vs Batch Size')\n plt.grid(True)\n plt.show()\n\ndef main():\n # Load the model with generate.choice configured\n llm_choice = load_model()\n \n # List of topic pairs to score\n topic_pairs = [\n ('artificial intelligence', 'machine learning'),\n ('quantum physics', 'astronomy'),\n ('poetry', 'literature'),\n ('basketball', 'soccer'),\n ('biology', 'chemistry'),\n ('music', 'painting'),","supportThemeIcons":false,"supportHtml":false}}]}
2025-03-11 18:17:24.423 [trace] [Window] ChatListItemRenderer#getNextProgressiveRenderContent: Want to render 176 at 13.783421606789611 words/s, counting...
2025-03-11 18:17:24.423 [trace] [Window] ChatListItemRenderer#getNextProgressiveRenderContent: Chunk 0: Want to render 176 words and found 176 words. Total words in chunk: 216
2025-03-11 18:17:24.423 [trace] [Window] ChatListItemRenderer#getNextProgressiveRenderContent: Want to render 176 words. Rendering 176 words. Buffer: 40 words
2025-03-11 18:17:24.424 [trace] [Window] ChatListItemRenderer#doNextProgressiveRender: not rendering any new content this tick, but more available
2025-03-11 18:17:24.473 [trace] [Window] ChatListItemRenderer#doNextProgressiveRender: START progressive render, index=1, renderData={"lastRenderTime":1741717044423,"renderedWordCount":176,"renderedParts":[{"kind":"markdownContent","content":{"value":"### [test.py](file:///home/user/dev/testing/test.py)\n\nAdd functionality to compare throughput for different batch sizes and generate graphs.\n\n````python\n<vscode_codeblock_uri>file:///home/user/dev/testing/test.py</vscode_codeblock_uri>import time\nimport warnings\nimport os\nfrom typing import List\nimport matplotlib.pyplot as plt # Add matplotlib for graphing\n\nwarnings.filterwarnings(\"ignore\")\nos.environ['TF_CPP_MIN_LOG_LEVEL'] = '3'\n\nfrom outlines import models, generate, samplers\n\n# ...existing code...\n\ndef benchmark_batch_sizes(topic_pairs: List[tuple], llm_choice, batch_sizes: List[int]):\n \"\"\"Benchmark performance for different batch sizes.\"\"\"\n results = {}\n for batch_size in batch_sizes:\n print(f\"\\nBenchmarking batch size: {batch_size}\")\n start_time = time.time()\n for i in range(0, len(topic_pairs), batch_size):\n batch = topic_pairs[i:i + batch_size]\n batch_score_topics(batch, llm_choice)\n total_time = time.time() - start_time\n throughput = len(topic_pairs) / total_time\n results[batch_size] = throughput\n print(f\"Batch size: {batch_size}, Throughput: {throughput:.2f} pairs/second\")\n return results\n\ndef plot_results(results):\n \"\"\"Plot the benchmarking results.\"\"\"\n batch_sizes = list(results.keys())\n throughputs = list(results.values())\n plt.figure(figsize=(10, 6))\n plt.plot(batch_sizes, throughputs, marker='o')\n plt.xlabel('Batch Size')\n plt.ylabel('Throughput (pairs/second)')\n plt.title('Throughput vs Batch Size')\n plt.grid(True)\n plt.show()\n\ndef main():\n # Load the model with generate.choice configured\n llm_choice = load_model()\n \n # List of topic pairs to score\n topic_pairs = [\n ('artificial intelligence', 'machine learning'),\n ('quantum physics', 'astronomy'),\n ('poetry', 'literature'),\n ('basketball', 'soccer'),\n ('biology', 'chemistry'),\n ('music', 'painting'),\n ('cooking',","supportThemeIcons":false,"supportHtml":false}}]}
2025-03-11 18:17:24.473 [trace] [Window] ChatListItemRenderer#getNextProgressiveRenderContent: Want to render 176 at 13.783421606789611 words/s, counting...
2025-03-11 18:17:24.474 [trace] [Window] ChatListItemRenderer#getNextProgressiveRenderContent: Chunk 0: Want to render 176 words and found 176 words. Total words in chunk: 216
2025-03-11 18:17:24.474 [trace] [Window] ChatListItemRenderer#getNextProgressiveRenderContent: Want to render 176 words. Rendering 176 words. Buffer: 40 words
2025-03-11 18:17:24.474 [trace] [Window] ChatListItemRenderer#doNextProgressiveRender: not rendering any new content this tick, but more available
2025-03-11 18:17:24.523 [trace] [Window] ChatListItemRenderer#doNextProgressiveRender: START progressive render, index=1, renderData={"lastRenderTime":1741717044423,"renderedWordCount":176,"renderedParts":[{"kind":"markdownContent","content":{"value":"### [test.py](file:///home/user/dev/testing/test.py)\n\nAdd functionality to compare throughput for different batch sizes and generate graphs.\n\n````python\n<vscode_codeblock_uri>file:///home/user/dev/testing/test.py</vscode_codeblock_uri>import time\nimport warnings\nimport os\nfrom typing import List\nimport matplotlib.pyplot as plt # Add matplotlib for graphing\n\nwarnings.filterwarnings(\"ignore\")\nos.environ['TF_CPP_MIN_LOG_LEVEL'] = '3'\n\nfrom outlines import models, generate, samplers\n\n# ...existing code...\n\ndef benchmark_batch_sizes(topic_pairs: List[tuple], llm_choice, batch_sizes: List[int]):\n \"\"\"Benchmark performance for different batch sizes.\"\"\"\n results = {}\n for batch_size in batch_sizes:\n print(f\"\\nBenchmarking batch size: {batch_size}\")\n start_time = time.time()\n for i in range(0, len(topic_pairs), batch_size):\n batch = topic_pairs[i:i + batch_size]\n batch_score_topics(batch, llm_choice)\n total_time = time.time() - start_time\n throughput = len(topic_pairs) / total_time\n results[batch_size] = throughput\n print(f\"Batch size: {batch_size}, Throughput: {throughput:.2f} pairs/second\")\n return results\n\ndef plot_results(results):\n \"\"\"Plot the benchmarking results.\"\"\"\n batch_sizes = list(results.keys())\n throughputs = list(results.values())\n plt.figure(figsize=(10, 6))\n plt.plot(batch_sizes, throughputs, marker='o')\n plt.xlabel('Batch Size')\n plt.ylabel('Throughput (pairs/second)')\n plt.title('Throughput vs Batch Size')\n plt.grid(True)\n plt.show()\n\ndef main():\n # Load the model with generate.choice configured\n llm_choice = load_model()\n \n # List of topic pairs to score\n topic_pairs = [\n ('artificial intelligence', 'machine learning'),\n ('quantum physics', 'astronomy'),\n ('poetry', 'literature'),\n ('basketball', 'soccer'),\n ('biology', 'chemistry'),\n ('music', 'painting'),\n ('cooking',","supportThemeIcons":false,"supportHtml":false}}]}
2025-03-11 18:17:24.523 [trace] [Window] ChatListItemRenderer#getNextProgressiveRenderContent: Want to render 177 at 13.783421606789611 words/s, counting...
2025-03-11 18:17:24.524 [trace] [Window] ChatListItemRenderer#getNextProgressiveRenderContent: Chunk 0: Want to render 177 words and found 177 words. Total words in chunk: 216
2025-03-11 18:17:24.524 [trace] [Window] ChatListItemRenderer#getNextProgressiveRenderContent: Want to render 177 words. Rendering 177 words. Buffer: 39 words
2025-03-11 18:17:24.524 [trace] [Window] ChatListItemRenderer#doNextProgressiveRender: not rendering any new content this tick, but more available
2025-03-11 18:17:24.573 [trace] [Window] ChatListItemRenderer#doNextProgressiveRender: START progressive render, index=1, renderData={"lastRenderTime":1741717044523,"renderedWordCount":177,"renderedParts":[{"kind":"markdownContent","content":{"value":"### [test.py](file:///home/user/dev/testing/test.py)\n\nAdd functionality to compare throughput for different batch sizes and generate graphs.\n\n````python\n<vscode_codeblock_uri>file:///home/user/dev/testing/test.py</vscode_codeblock_uri>import time\nimport warnings\nimport os\nfrom typing import List\nimport matplotlib.pyplot as plt # Add matplotlib for graphing\n\nwarnings.filterwarnings(\"ignore\")\nos.environ['TF_CPP_MIN_LOG_LEVEL'] = '3'\n\nfrom outlines import models, generate, samplers\n\n# ...existing code...\n\ndef benchmark_batch_sizes(topic_pairs: List[tuple], llm_choice, batch_sizes: List[int]):\n \"\"\"Benchmark performance for different batch sizes.\"\"\"\n results = {}\n for batch_size in batch_sizes:\n print(f\"\\nBenchmarking batch size: {batch_size}\")\n start_time = time.time()\n for i in range(0, len(topic_pairs), batch_size):\n batch = topic_pairs[i:i + batch_size]\n batch_score_topics(batch, llm_choice)\n total_time = time.time() - start_time\n throughput = len(topic_pairs) / total_time\n results[batch_size] = throughput\n print(f\"Batch size: {batch_size}, Throughput: {throughput:.2f} pairs/second\")\n return results\n\ndef plot_results(results):\n \"\"\"Plot the benchmarking results.\"\"\"\n batch_sizes = list(results.keys())\n throughputs = list(results.values())\n plt.figure(figsize=(10, 6))\n plt.plot(batch_sizes, throughputs, marker='o')\n plt.xlabel('Batch Size')\n plt.ylabel('Throughput (pairs/second)')\n plt.title('Throughput vs Batch Size')\n plt.grid(True)\n plt.show()\n\ndef main():\n # Load the model with generate.choice configured\n llm_choice = load_model()\n \n # List of topic pairs to score\n topic_pairs = [\n ('artificial intelligence', 'machine learning'),\n ('quantum physics', 'astronomy'),\n ('poetry', 'literature'),\n ('basketball', 'soccer'),\n ('biology', 'chemistry'),\n ('music', 'painting'),\n ('cooking', 'baking'),","supportThemeIcons":false,"supportHtml":false}}]}
2025-03-11 18:17:24.574 [trace] [Window] ChatListItemRenderer#getNextProgressiveRenderContent: Want to render 177 at 13.783421606789611 words/s, counting...
2025-03-11 18:17:24.574 [trace] [Window] ChatListItemRenderer#getNextProgressiveRenderContent: Chunk 0: Want to render 177 words and found 177 words. Total words in chunk: 216
2025-03-11 18:17:24.574 [trace] [Window] ChatListItemRenderer#getNextProgressiveRenderContent: Want to render 177 words. Rendering 177 words. Buffer: 39 words
2025-03-11 18:17:24.574 [trace] [Window] ChatListItemRenderer#doNextProgressiveRender: not rendering any new content this tick, but more available
2025-03-11 18:17:24.623 [trace] [Window] ChatListItemRenderer#doNextProgressiveRender: START progressive render, index=1, renderData={"lastRenderTime":1741717044523,"renderedWordCount":177,"renderedParts":[{"kind":"markdownContent","content":{"value":"### [test.py](file:///home/user/dev/testing/test.py)\n\nAdd functionality to compare throughput for different batch sizes and generate graphs.\n\n````python\n<vscode_codeblock_uri>file:///home/user/dev/testing/test.py</vscode_codeblock_uri>import time\nimport warnings\nimport os\nfrom typing import List\nimport matplotlib.pyplot as plt # Add matplotlib for graphing\n\nwarnings.filterwarnings(\"ignore\")\nos.environ['TF_CPP_MIN_LOG_LEVEL'] = '3'\n\nfrom outlines import models, generate, samplers\n\n# ...existing code...\n\ndef benchmark_batch_sizes(topic_pairs: List[tuple], llm_choice, batch_sizes: List[int]):\n \"\"\"Benchmark performance for different batch sizes.\"\"\"\n results = {}\n for batch_size in batch_sizes:\n print(f\"\\nBenchmarking batch size: {batch_size}\")\n start_time = time.time()\n for i in range(0, len(topic_pairs), batch_size):\n batch = topic_pairs[i:i + batch_size]\n batch_score_topics(batch, llm_choice)\n total_time = time.time() - start_time\n throughput = len(topic_pairs) / total_time\n results[batch_size] = throughput\n print(f\"Batch size: {batch_size}, Throughput: {throughput:.2f} pairs/second\")\n return results\n\ndef plot_results(results):\n \"\"\"Plot the benchmarking results.\"\"\"\n batch_sizes = list(results.keys())\n throughputs = list(results.values())\n plt.figure(figsize=(10, 6))\n plt.plot(batch_sizes, throughputs, marker='o')\n plt.xlabel('Batch Size')\n plt.ylabel('Throughput (pairs/second)')\n plt.title('Throughput vs Batch Size')\n plt.grid(True)\n plt.show()\n\ndef main():\n # Load the model with generate.choice configured\n llm_choice = load_model()\n \n # List of topic pairs to score\n topic_pairs = [\n ('artificial intelligence', 'machine learning'),\n ('quantum physics', 'astronomy'),\n ('poetry', 'literature'),\n ('basketball', 'soccer'),\n ('biology', 'chemistry'),\n ('music', 'painting'),\n ('cooking', 'baking'),","supportThemeIcons":false,"supportHtml":false}}]}
2025-03-11 18:17:24.624 [trace] [Window] ChatListItemRenderer#getNextProgressiveRenderContent: Want to render 178 at 13.783421606789611 words/s, counting...
2025-03-11 18:17:24.625 [trace] [Window] ChatListItemRenderer#getNextProgressiveRenderContent: Chunk 0: Want to render 178 words and found 178 words. Total words in chunk: 216
2025-03-11 18:17:24.625 [trace] [Window] ChatListItemRenderer#getNextProgressiveRenderContent: Want to render 178 words. Rendering 178 words. Buffer: 38 words
2025-03-11 18:17:24.625 [trace] [Window] ChatListItemRenderer#doNextProgressiveRender: not rendering any new content this tick, but more available
2025-03-11 18:17:24.673 [trace] [Window] ChatListItemRenderer#doNextProgressiveRender: START progressive render, index=1, renderData={"lastRenderTime":1741717044623,"renderedWordCount":178,"renderedParts":[{"kind":"markdownContent","content":{"value":"### [test.py](file:///home/user/dev/testing/test.py)\n\nAdd functionality to compare throughput for different batch sizes and generate graphs.\n\n````python\n<vscode_codeblock_uri>file:///home/user/dev/testing/test.py</vscode_codeblock_uri>import time\nimport warnings\nimport os\nfrom typing import List\nimport matplotlib.pyplot as plt # Add matplotlib for graphing\n\nwarnings.filterwarnings(\"ignore\")\nos.environ['TF_CPP_MIN_LOG_LEVEL'] = '3'\n\nfrom outlines import models, generate, samplers\n\n# ...existing code...\n\ndef benchmark_batch_sizes(topic_pairs: List[tuple], llm_choice, batch_sizes: List[int]):\n \"\"\"Benchmark performance for different batch sizes.\"\"\"\n results = {}\n for batch_size in batch_sizes:\n print(f\"\\nBenchmarking batch size: {batch_size}\")\n start_time = time.time()\n for i in range(0, len(topic_pairs), batch_size):\n batch = topic_pairs[i:i + batch_size]\n batch_score_topics(batch, llm_choice)\n total_time = time.time() - start_time\n throughput = len(topic_pairs) / total_time\n results[batch_size] = throughput\n print(f\"Batch size: {batch_size}, Throughput: {throughput:.2f} pairs/second\")\n return results\n\ndef plot_results(results):\n \"\"\"Plot the benchmarking results.\"\"\"\n batch_sizes = list(results.keys())\n throughputs = list(results.values())\n plt.figure(figsize=(10, 6))\n plt.plot(batch_sizes, throughputs, marker='o')\n plt.xlabel('Batch Size')\n plt.ylabel('Throughput (pairs/second)')\n plt.title('Throughput vs Batch Size')\n plt.grid(True)\n plt.show()\n\ndef main():\n # Load the model with generate.choice configured\n llm_choice = load_model()\n \n # List of topic pairs to score\n topic_pairs = [\n ('artificial intelligence', 'machine learning'),\n ('quantum physics', 'astronomy'),\n ('poetry', 'literature'),\n ('basketball', 'soccer'),\n ('biology', 'chemistry'),\n ('music', 'painting'),\n ('cooking', 'baking'),\n ('psychology',","supportThemeIcons":false,"supportHtml":false}}]}
2025-03-11 18:17:24.674 [trace] [Window] ChatListItemRenderer#getNextProgressiveRenderContent: Want to render 178 at 13.783421606789611 words/s, counting...
2025-03-11 18:17:24.674 [trace] [Window] ChatListItemRenderer#getNextProgressiveRenderContent: Chunk 0: Want to render 178 words and found 178 words. Total words in chunk: 216
2025-03-11 18:17:24.674 [trace] [Window] ChatListItemRenderer#getNextProgressiveRenderContent: Want to render 178 words. Rendering 178 words. Buffer: 38 words
2025-03-11 18:17:24.675 [trace] [Window] ChatListItemRenderer#doNextProgressiveRender: not rendering any new content this tick, but more available
2025-03-11 18:17:24.723 [trace] [Window] ChatListItemRenderer#doNextProgressiveRender: START progressive render, index=1, renderData={"lastRenderTime":1741717044623,"renderedWordCount":178,"renderedParts":[{"kind":"markdownContent","content":{"value":"### [test.py](file:///home/user/dev/testing/test.py)\n\nAdd functionality to compare throughput for different batch sizes and generate graphs.\n\n````python\n<vscode_codeblock_uri>file:///home/user/dev/testing/test.py</vscode_codeblock_uri>import time\nimport warnings\nimport os\nfrom typing import List\nimport matplotlib.pyplot as plt # Add matplotlib for graphing\n\nwarnings.filterwarnings(\"ignore\")\nos.environ['TF_CPP_MIN_LOG_LEVEL'] = '3'\n\nfrom outlines import models, generate, samplers\n\n# ...existing code...\n\ndef benchmark_batch_sizes(topic_pairs: List[tuple], llm_choice, batch_sizes: List[int]):\n \"\"\"Benchmark performance for different batch sizes.\"\"\"\n results = {}\n for batch_size in batch_sizes:\n print(f\"\\nBenchmarking batch size: {batch_size}\")\n start_time = time.time()\n for i in range(0, len(topic_pairs), batch_size):\n batch = topic_pairs[i:i + batch_size]\n batch_score_topics(batch, llm_choice)\n total_time = time.time() - start_time\n throughput = len(topic_pairs) / total_time\n results[batch_size] = throughput\n print(f\"Batch size: {batch_size}, Throughput: {throughput:.2f} pairs/second\")\n return results\n\ndef plot_results(results):\n \"\"\"Plot the benchmarking results.\"\"\"\n batch_sizes = list(results.keys())\n throughputs = list(results.values())\n plt.figure(figsize=(10, 6))\n plt.plot(batch_sizes, throughputs, marker='o')\n plt.xlabel('Batch Size')\n plt.ylabel('Throughput (pairs/second)')\n plt.title('Throughput vs Batch Size')\n plt.grid(True)\n plt.show()\n\ndef main():\n # Load the model with generate.choice configured\n llm_choice = load_model()\n \n # List of topic pairs to score\n topic_pairs = [\n ('artificial intelligence', 'machine learning'),\n ('quantum physics', 'astronomy'),\n ('poetry', 'literature'),\n ('basketball', 'soccer'),\n ('biology', 'chemistry'),\n ('music', 'painting'),\n ('cooking', 'baking'),\n ('psychology',","supportThemeIcons":false,"supportHtml":false}}]}
2025-03-11 18:17:24.724 [trace] [Window] ChatListItemRenderer#getNextProgressiveRenderContent: Want to render 179 at 13.783421606789611 words/s, counting...
2025-03-11 18:17:24.725 [trace] [Window] ChatListItemRenderer#getNextProgressiveRenderContent: Chunk 0: Want to render 179 words and found 179 words. Total words in chunk: 216
2025-03-11 18:17:24.725 [trace] [Window] ChatListItemRenderer#getNextProgressiveRenderContent: Want to render 179 words. Rendering 179 words. Buffer: 37 words
2025-03-11 18:17:24.725 [trace] [Window] ChatListItemRenderer#doNextProgressiveRender: not rendering any new content this tick, but more available
2025-03-11 18:17:24.773 [trace] [Window] ChatListItemRenderer#doNextProgressiveRender: START progressive render, index=1, renderData={"lastRenderTime":1741717044723,"renderedWordCount":179,"renderedParts":[{"kind":"markdownContent","content":{"value":"### [test.py](file:///home/user/dev/testing/test.py)\n\nAdd functionality to compare throughput for different batch sizes and generate graphs.\n\n````python\n<vscode_codeblock_uri>file:///home/user/dev/testing/test.py</vscode_codeblock_uri>import time\nimport warnings\nimport os\nfrom typing import List\nimport matplotlib.pyplot as plt # Add matplotlib for graphing\n\nwarnings.filterwarnings(\"ignore\")\nos.environ['TF_CPP_MIN_LOG_LEVEL'] = '3'\n\nfrom outlines import models, generate, samplers\n\n# ...existing code...\n\ndef benchmark_batch_sizes(topic_pairs: List[tuple], llm_choice, batch_sizes: List[int]):\n \"\"\"Benchmark performance for different batch sizes.\"\"\"\n results = {}\n for batch_size in batch_sizes:\n print(f\"\\nBenchmarking batch size: {batch_size}\")\n start_time = time.time()\n for i in range(0, len(topic_pairs), batch_size):\n batch = topic_pairs[i:i + batch_size]\n batch_score_topics(batch, llm_choice)\n total_time = time.time() - start_time\n throughput = len(topic_pairs) / total_time\n results[batch_size] = throughput\n print(f\"Batch size: {batch_size}, Throughput: {throughput:.2f} pairs/second\")\n return results\n\ndef plot_results(results):\n \"\"\"Plot the benchmarking results.\"\"\"\n batch_sizes = list(results.keys())\n throughputs = list(results.values())\n plt.figure(figsize=(10, 6))\n plt.plot(batch_sizes, throughputs, marker='o')\n plt.xlabel('Batch Size')\n plt.ylabel('Throughput (pairs/second)')\n plt.title('Throughput vs Batch Size')\n plt.grid(True)\n plt.show()\n\ndef main():\n # Load the model with generate.choice configured\n llm_choice = load_model()\n \n # List of topic pairs to score\n topic_pairs = [\n ('artificial intelligence', 'machine learning'),\n ('quantum physics', 'astronomy'),\n ('poetry', 'literature'),\n ('basketball', 'soccer'),\n ('biology', 'chemistry'),\n ('music', 'painting'),\n ('cooking', 'baking'),\n ('psychology', 'neuroscience')","supportThemeIcons":false,"supportHtml":false}}]}
2025-03-11 18:17:24.774 [trace] [Window] ChatListItemRenderer#getNextProgressiveRenderContent: Want to render 179 at 13.783421606789611 words/s, counting...
2025-03-11 18:17:24.774 [trace] [Window] ChatListItemRenderer#getNextProgressiveRenderContent: Chunk 0: Want to render 179 words and found 179 words. Total words in chunk: 216
2025-03-11 18:17:24.774 [trace] [Window] ChatListItemRenderer#getNextProgressiveRenderContent: Want to render 179 words. Rendering 179 words. Buffer: 37 words
2025-03-11 18:17:24.775 [trace] [Window] ChatListItemRenderer#doNextProgressiveRender: not rendering any new content this tick, but more available
2025-03-11 18:17:24.823 [trace] [Window] ChatListItemRenderer#doNextProgressiveRender: START progressive render, index=1, renderData={"lastRenderTime":1741717044723,"renderedWordCount":179,"renderedParts":[{"kind":"markdownContent","content":{"value":"### [test.py](file:///home/user/dev/testing/test.py)\n\nAdd functionality to compare throughput for different batch sizes and generate graphs.\n\n````python\n<vscode_codeblock_uri>file:///home/user/dev/testing/test.py</vscode_codeblock_uri>import time\nimport warnings\nimport os\nfrom typing import List\nimport matplotlib.pyplot as plt # Add matplotlib for graphing\n\nwarnings.filterwarnings(\"ignore\")\nos.environ['TF_CPP_MIN_LOG_LEVEL'] = '3'\n\nfrom outlines import models, generate, samplers\n\n# ...existing code...\n\ndef benchmark_batch_sizes(topic_pairs: List[tuple], llm_choice, batch_sizes: List[int]):\n \"\"\"Benchmark performance for different batch sizes.\"\"\"\n results = {}\n for batch_size in batch_sizes:\n print(f\"\\nBenchmarking batch size: {batch_size}\")\n start_time = time.time()\n for i in range(0, len(topic_pairs), batch_size):\n batch = topic_pairs[i:i + batch_size]\n batch_score_topics(batch, llm_choice)\n total_time = time.time() - start_time\n throughput = len(topic_pairs) / total_time\n results[batch_size] = throughput\n print(f\"Batch size: {batch_size}, Throughput: {throughput:.2f} pairs/second\")\n return results\n\ndef plot_results(results):\n \"\"\"Plot the benchmarking results.\"\"\"\n batch_sizes = list(results.keys())\n throughputs = list(results.values())\n plt.figure(figsize=(10, 6))\n plt.plot(batch_sizes, throughputs, marker='o')\n plt.xlabel('Batch Size')\n plt.ylabel('Throughput (pairs/second)')\n plt.title('Throughput vs Batch Size')\n plt.grid(True)\n plt.show()\n\ndef main():\n # Load the model with generate.choice configured\n llm_choice = load_model()\n \n # List of topic pairs to score\n topic_pairs = [\n ('artificial intelligence', 'machine learning'),\n ('quantum physics', 'astronomy'),\n ('poetry', 'literature'),\n ('basketball', 'soccer'),\n ('biology', 'chemistry'),\n ('music', 'painting'),\n ('cooking', 'baking'),\n ('psychology', 'neuroscience')","supportThemeIcons":false,"supportHtml":false}}]}
2025-03-11 18:17:24.824 [trace] [Window] ChatListItemRenderer#getNextProgressiveRenderContent: Want to render 180 at 13.783421606789611 words/s, counting...
2025-03-11 18:17:24.824 [trace] [Window] ChatListItemRenderer#getNextProgressiveRenderContent: Chunk 0: Want to render 180 words and found 180 words. Total words in chunk: 216
2025-03-11 18:17:24.825 [trace] [Window] ChatListItemRenderer#getNextProgressiveRenderContent: Want to render 180 words. Rendering 180 words. Buffer: 36 words
2025-03-11 18:17:24.826 [trace] [Window] ChatListItemRenderer#doNextProgressiveRender: not rendering any new content this tick, but more available
2025-03-11 18:17:24.873 [trace] [Window] ChatListItemRenderer#doNextProgressiveRender: START progressive render, index=1, renderData={"lastRenderTime":1741717044823,"renderedWordCount":180,"renderedParts":[{"kind":"markdownContent","content":{"value":"### [test.py](file:///home/user/dev/testing/test.py)\n\nAdd functionality to compare throughput for different batch sizes and generate graphs.\n\n````python\n<vscode_codeblock_uri>file:///home/user/dev/testing/test.py</vscode_codeblock_uri>import time\nimport warnings\nimport os\nfrom typing import List\nimport matplotlib.pyplot as plt # Add matplotlib for graphing\n\nwarnings.filterwarnings(\"ignore\")\nos.environ['TF_CPP_MIN_LOG_LEVEL'] = '3'\n\nfrom outlines import models, generate, samplers\n\n# ...existing code...\n\ndef benchmark_batch_sizes(topic_pairs: List[tuple], llm_choice, batch_sizes: List[int]):\n \"\"\"Benchmark performance for different batch sizes.\"\"\"\n results = {}\n for batch_size in batch_sizes:\n print(f\"\\nBenchmarking batch size: {batch_size}\")\n start_time = time.time()\n for i in range(0, len(topic_pairs), batch_size):\n batch = topic_pairs[i:i + batch_size]\n batch_score_topics(batch, llm_choice)\n total_time = time.time() - start_time\n throughput = len(topic_pairs) / total_time\n results[batch_size] = throughput\n print(f\"Batch size: {batch_size}, Throughput: {throughput:.2f} pairs/second\")\n return results\n\ndef plot_results(results):\n \"\"\"Plot the benchmarking results.\"\"\"\n batch_sizes = list(results.keys())\n throughputs = list(results.values())\n plt.figure(figsize=(10, 6))\n plt.plot(batch_sizes, throughputs, marker='o')\n plt.xlabel('Batch Size')\n plt.ylabel('Throughput (pairs/second)')\n plt.title('Throughput vs Batch Size')\n plt.grid(True)\n plt.show()\n\ndef main():\n # Load the model with generate.choice configured\n llm_choice = load_model()\n \n # List of topic pairs to score\n topic_pairs = [\n ('artificial intelligence', 'machine learning'),\n ('quantum physics', 'astronomy'),\n ('poetry', 'literature'),\n ('basketball', 'soccer'),\n ('biology', 'chemistry'),\n ('music', 'painting'),\n ('cooking', 'baking'),\n ('psychology', 'neuroscience')\n ]","supportThemeIcons":false,"supportHtml":false}}]}
2025-03-11 18:17:24.874 [trace] [Window] ChatListItemRenderer#getNextProgressiveRenderContent: Want to render 180 at 13.783421606789611 words/s, counting...
2025-03-11 18:17:24.874 [trace] [Window] ChatListItemRenderer#getNextProgressiveRenderContent: Chunk 0: Want to render 180 words and found 180 words. Total words in chunk: 216
2025-03-11 18:17:24.874 [trace] [Window] ChatListItemRenderer#getNextProgressiveRenderContent: Want to render 180 words. Rendering 180 words. Buffer: 36 words
2025-03-11 18:17:24.875 [trace] [Window] ChatListItemRenderer#doNextProgressiveRender: not rendering any new content this tick, but more available
2025-03-11 18:17:24.924 [trace] [Window] ChatListItemRenderer#doNextProgressiveRender: START progressive render, index=1, renderData={"lastRenderTime":1741717044823,"renderedWordCount":180,"renderedParts":[{"kind":"markdownContent","content":{"value":"### [test.py](file:///home/user/dev/testing/test.py)\n\nAdd functionality to compare throughput for different batch sizes and generate graphs.\n\n````python\n<vscode_codeblock_uri>file:///home/user/dev/testing/test.py</vscode_codeblock_uri>import time\nimport warnings\nimport os\nfrom typing import List\nimport matplotlib.pyplot as plt # Add matplotlib for graphing\n\nwarnings.filterwarnings(\"ignore\")\nos.environ['TF_CPP_MIN_LOG_LEVEL'] = '3'\n\nfrom outlines import models, generate, samplers\n\n# ...existing code...\n\ndef benchmark_batch_sizes(topic_pairs: List[tuple], llm_choice, batch_sizes: List[int]):\n \"\"\"Benchmark performance for different batch sizes.\"\"\"\n results = {}\n for batch_size in batch_sizes:\n print(f\"\\nBenchmarking batch size: {batch_size}\")\n start_time = time.time()\n for i in range(0, len(topic_pairs), batch_size):\n batch = topic_pairs[i:i + batch_size]\n batch_score_topics(batch, llm_choice)\n total_time = time.time() - start_time\n throughput = len(topic_pairs) / total_time\n results[batch_size] = throughput\n print(f\"Batch size: {batch_size}, Throughput: {throughput:.2f} pairs/second\")\n return results\n\ndef plot_results(results):\n \"\"\"Plot the benchmarking results.\"\"\"\n batch_sizes = list(results.keys())\n throughputs = list(results.values())\n plt.figure(figsize=(10, 6))\n plt.plot(batch_sizes, throughputs, marker='o')\n plt.xlabel('Batch Size')\n plt.ylabel('Throughput (pairs/second)')\n plt.title('Throughput vs Batch Size')\n plt.grid(True)\n plt.show()\n\ndef main():\n # Load the model with generate.choice configured\n llm_choice = load_model()\n \n # List of topic pairs to score\n topic_pairs = [\n ('artificial intelligence', 'machine learning'),\n ('quantum physics', 'astronomy'),\n ('poetry', 'literature'),\n ('basketball', 'soccer'),\n ('biology', 'chemistry'),\n ('music', 'painting'),\n ('cooking', 'baking'),\n ('psychology', 'neuroscience')\n ]","supportThemeIcons":false,"supportHtml":false}}]}
2025-03-11 18:17:24.924 [trace] [Window] ChatListItemRenderer#getNextProgressiveRenderContent: Want to render 181 at 13.783421606789611 words/s, counting...
2025-03-11 18:17:24.924 [trace] [Window] ChatListItemRenderer#getNextProgressiveRenderContent: Chunk 0: Want to render 181 words and found 181 words. Total words in chunk: 216
2025-03-11 18:17:24.925 [trace] [Window] ChatListItemRenderer#getNextProgressiveRenderContent: Want to render 181 words. Rendering 181 words. Buffer: 35 words
2025-03-11 18:17:24.925 [trace] [Window] ChatListItemRenderer#doNextProgressiveRender: not rendering any new content this tick, but more available
2025-03-11 18:17:24.973 [trace] [Window] ChatListItemRenderer#doNextProgressiveRender: START progressive render, index=1, renderData={"lastRenderTime":1741717044924,"renderedWordCount":181,"renderedParts":[{"kind":"markdownContent","content":{"value":"### [test.py](file:///home/user/dev/testing/test.py)\n\nAdd functionality to compare throughput for different batch sizes and generate graphs.\n\n````python\n<vscode_codeblock_uri>file:///home/user/dev/testing/test.py</vscode_codeblock_uri>import time\nimport warnings\nimport os\nfrom typing import List\nimport matplotlib.pyplot as plt # Add matplotlib for graphing\n\nwarnings.filterwarnings(\"ignore\")\nos.environ['TF_CPP_MIN_LOG_LEVEL'] = '3'\n\nfrom outlines import models, generate, samplers\n\n# ...existing code...\n\ndef benchmark_batch_sizes(topic_pairs: List[tuple], llm_choice, batch_sizes: List[int]):\n \"\"\"Benchmark performance for different batch sizes.\"\"\"\n results = {}\n for batch_size in batch_sizes:\n print(f\"\\nBenchmarking batch size: {batch_size}\")\n start_time = time.time()\n for i in range(0, len(topic_pairs), batch_size):\n batch = topic_pairs[i:i + batch_size]\n batch_score_topics(batch, llm_choice)\n total_time = time.time() - start_time\n throughput = len(topic_pairs) / total_time\n results[batch_size] = throughput\n print(f\"Batch size: {batch_size}, Throughput: {throughput:.2f} pairs/second\")\n return results\n\ndef plot_results(results):\n \"\"\"Plot the benchmarking results.\"\"\"\n batch_sizes = list(results.keys())\n throughputs = list(results.values())\n plt.figure(figsize=(10, 6))\n plt.plot(batch_sizes, throughputs, marker='o')\n plt.xlabel('Batch Size')\n plt.ylabel('Throughput (pairs/second)')\n plt.title('Throughput vs Batch Size')\n plt.grid(True)\n plt.show()\n\ndef main():\n # Load the model with generate.choice configured\n llm_choice = load_model()\n \n # List of topic pairs to score\n topic_pairs = [\n ('artificial intelligence', 'machine learning'),\n ('quantum physics', 'astronomy'),\n ('poetry', 'literature'),\n ('basketball', 'soccer'),\n ('biology', 'chemistry'),\n ('music', 'painting'),\n ('cooking', 'baking'),\n ('psychology', 'neuroscience')\n ]\n \n #","supportThemeIcons":false,"supportHtml":false}}]}
2025-03-11 18:17:24.974 [trace] [Window] ChatListItemRenderer#getNextProgressiveRenderContent: Want to render 181 at 13.783421606789611 words/s, counting...
2025-03-11 18:17:24.974 [trace] [Window] ChatListItemRenderer#getNextProgressiveRenderContent: Chunk 0: Want to render 181 words and found 181 words. Total words in chunk: 216
2025-03-11 18:17:24.974 [trace] [Window] ChatListItemRenderer#getNextProgressiveRenderContent: Want to render 181 words. Rendering 181 words. Buffer: 35 words
2025-03-11 18:17:24.975 [trace] [Window] ChatListItemRenderer#doNextProgressiveRender: not rendering any new content this tick, but more available
2025-03-11 18:17:25.023 [trace] [Window] ChatListItemRenderer#doNextProgressiveRender: START progressive render, index=1, renderData={"lastRenderTime":1741717044924,"renderedWordCount":181,"renderedParts":[{"kind":"markdownContent","content":{"value":"### [test.py](file:///home/user/dev/testing/test.py)\n\nAdd functionality to compare throughput for different batch sizes and generate graphs.\n\n````python\n<vscode_codeblock_uri>file:///home/user/dev/testing/test.py</vscode_codeblock_uri>import time\nimport warnings\nimport os\nfrom typing import List\nimport matplotlib.pyplot as plt # Add matplotlib for graphing\n\nwarnings.filterwarnings(\"ignore\")\nos.environ['TF_CPP_MIN_LOG_LEVEL'] = '3'\n\nfrom outlines import models, generate, samplers\n\n# ...existing code...\n\ndef benchmark_batch_sizes(topic_pairs: List[tuple], llm_choice, batch_sizes: List[int]):\n \"\"\"Benchmark performance for different batch sizes.\"\"\"\n results = {}\n for batch_size in batch_sizes:\n print(f\"\\nBenchmarking batch size: {batch_size}\")\n start_time = time.time()\n for i in range(0, len(topic_pairs), batch_size):\n batch = topic_pairs[i:i + batch_size]\n batch_score_topics(batch, llm_choice)\n total_time = time.time() - start_time\n throughput = len(topic_pairs) / total_time\n results[batch_size] = throughput\n print(f\"Batch size: {batch_size}, Throughput: {throughput:.2f} pairs/second\")\n return results\n\ndef plot_results(results):\n \"\"\"Plot the benchmarking results.\"\"\"\n batch_sizes = list(results.keys())\n throughputs = list(results.values())\n plt.figure(figsize=(10, 6))\n plt.plot(batch_sizes, throughputs, marker='o')\n plt.xlabel('Batch Size')\n plt.ylabel('Throughput (pairs/second)')\n plt.title('Throughput vs Batch Size')\n plt.grid(True)\n plt.show()\n\ndef main():\n # Load the model with generate.choice configured\n llm_choice = load_model()\n \n # List of topic pairs to score\n topic_pairs = [\n ('artificial intelligence', 'machine learning'),\n ('quantum physics', 'astronomy'),\n ('poetry', 'literature'),\n ('basketball', 'soccer'),\n ('biology', 'chemistry'),\n ('music', 'painting'),\n ('cooking', 'baking'),\n ('psychology', 'neuroscience')\n ]\n \n #","supportThemeIcons":false,"supportHtml":false}}]}
2025-03-11 18:17:25.024 [trace] [Window] ChatListItemRenderer#getNextProgressiveRenderContent: Want to render 182 at 13.783421606789611 words/s, counting...
2025-03-11 18:17:25.024 [trace] [Window] ChatListItemRenderer#getNextProgressiveRenderContent: Chunk 0: Want to render 182 words and found 182 words. Total words in chunk: 216
2025-03-11 18:17:25.024 [trace] [Window] ChatListItemRenderer#getNextProgressiveRenderContent: Want to render 182 words. Rendering 182 words. Buffer: 34 words
2025-03-11 18:17:25.025 [trace] [Window] ChatListItemRenderer#doNextProgressiveRender: not rendering any new content this tick, but more available
2025-03-11 18:17:25.073 [trace] [Window] ChatListItemRenderer#doNextProgressiveRender: START progressive render, index=1, renderData={"lastRenderTime":1741717045023,"renderedWordCount":182,"renderedParts":[{"kind":"markdownContent","content":{"value":"### [test.py](file:///home/user/dev/testing/test.py)\n\nAdd functionality to compare throughput for different batch sizes and generate graphs.\n\n````python\n<vscode_codeblock_uri>file:///home/user/dev/testing/test.py</vscode_codeblock_uri>import time\nimport warnings\nimport os\nfrom typing import List\nimport matplotlib.pyplot as plt # Add matplotlib for graphing\n\nwarnings.filterwarnings(\"ignore\")\nos.environ['TF_CPP_MIN_LOG_LEVEL'] = '3'\n\nfrom outlines import models, generate, samplers\n\n# ...existing code...\n\ndef benchmark_batch_sizes(topic_pairs: List[tuple], llm_choice, batch_sizes: List[int]):\n \"\"\"Benchmark performance for different batch sizes.\"\"\"\n results = {}\n for batch_size in batch_sizes:\n print(f\"\\nBenchmarking batch size: {batch_size}\")\n start_time = time.time()\n for i in range(0, len(topic_pairs), batch_size):\n batch = topic_pairs[i:i + batch_size]\n batch_score_topics(batch, llm_choice)\n total_time = time.time() - start_time\n throughput = len(topic_pairs) / total_time\n results[batch_size] = throughput\n print(f\"Batch size: {batch_size}, Throughput: {throughput:.2f} pairs/second\")\n return results\n\ndef plot_results(results):\n \"\"\"Plot the benchmarking results.\"\"\"\n batch_sizes = list(results.keys())\n throughputs = list(results.values())\n plt.figure(figsize=(10, 6))\n plt.plot(batch_sizes, throughputs, marker='o')\n plt.xlabel('Batch Size')\n plt.ylabel('Throughput (pairs/second)')\n plt.title('Throughput vs Batch Size')\n plt.grid(True)\n plt.show()\n\ndef main():\n # Load the model with generate.choice configured\n llm_choice = load_model()\n \n # List of topic pairs to score\n topic_pairs = [\n ('artificial intelligence', 'machine learning'),\n ('quantum physics', 'astronomy'),\n ('poetry', 'literature'),\n ('basketball', 'soccer'),\n ('biology', 'chemistry'),\n ('music', 'painting'),\n ('cooking', 'baking'),\n ('psychology', 'neuroscience')\n ]\n \n # Benchmark","supportThemeIcons":false,"supportHtml":false}}]}
2025-03-11 18:17:25.074 [trace] [Window] ChatListItemRenderer#getNextProgressiveRenderContent: Want to render 182 at 13.783421606789611 words/s, counting...
2025-03-11 18:17:25.074 [trace] [Window] ChatListItemRenderer#getNextProgressiveRenderContent: Chunk 0: Want to render 182 words and found 182 words. Total words in chunk: 216
2025-03-11 18:17:25.074 [trace] [Window] ChatListItemRenderer#getNextProgressiveRenderContent: Want to render 182 words. Rendering 182 words. Buffer: 34 words
2025-03-11 18:17:25.075 [trace] [Window] ChatListItemRenderer#doNextProgressiveRender: not rendering any new content this tick, but more available
2025-03-11 18:17:25.123 [trace] [Window] ChatListItemRenderer#doNextProgressiveRender: START progressive render, index=1, renderData={"lastRenderTime":1741717045023,"renderedWordCount":182,"renderedParts":[{"kind":"markdownContent","content":{"value":"### [test.py](file:///home/user/dev/testing/test.py)\n\nAdd functionality to compare throughput for different batch sizes and generate graphs.\n\n````python\n<vscode_codeblock_uri>file:///home/user/dev/testing/test.py</vscode_codeblock_uri>import time\nimport warnings\nimport os\nfrom typing import List\nimport matplotlib.pyplot as plt # Add matplotlib for graphing\n\nwarnings.filterwarnings(\"ignore\")\nos.environ['TF_CPP_MIN_LOG_LEVEL'] = '3'\n\nfrom outlines import models, generate, samplers\n\n# ...existing code...\n\ndef benchmark_batch_sizes(topic_pairs: List[tuple], llm_choice, batch_sizes: List[int]):\n \"\"\"Benchmark performance for different batch sizes.\"\"\"\n results = {}\n for batch_size in batch_sizes:\n print(f\"\\nBenchmarking batch size: {batch_size}\")\n start_time = time.time()\n for i in range(0, len(topic_pairs), batch_size):\n batch = topic_pairs[i:i + batch_size]\n batch_score_topics(batch, llm_choice)\n total_time = time.time() - start_time\n throughput = len(topic_pairs) / total_time\n results[batch_size] = throughput\n print(f\"Batch size: {batch_size}, Throughput: {throughput:.2f} pairs/second\")\n return results\n\ndef plot_results(results):\n \"\"\"Plot the benchmarking results.\"\"\"\n batch_sizes = list(results.keys())\n throughputs = list(results.values())\n plt.figure(figsize=(10, 6))\n plt.plot(batch_sizes, throughputs, marker='o')\n plt.xlabel('Batch Size')\n plt.ylabel('Throughput (pairs/second)')\n plt.title('Throughput vs Batch Size')\n plt.grid(True)\n plt.show()\n\ndef main():\n # Load the model with generate.choice configured\n llm_choice = load_model()\n \n # List of topic pairs to score\n topic_pairs = [\n ('artificial intelligence', 'machine learning'),\n ('quantum physics', 'astronomy'),\n ('poetry', 'literature'),\n ('basketball', 'soccer'),\n ('biology', 'chemistry'),\n ('music', 'painting'),\n ('cooking', 'baking'),\n ('psychology', 'neuroscience')\n ]\n \n # Benchmark","supportThemeIcons":false,"supportHtml":false}}]}
2025-03-11 18:17:25.124 [trace] [Window] ChatListItemRenderer#getNextProgressiveRenderContent: Want to render 183 at 13.783421606789611 words/s, counting...
2025-03-11 18:17:25.124 [trace] [Window] ChatListItemRenderer#getNextProgressiveRenderContent: Chunk 0: Want to render 183 words and found 183 words. Total words in chunk: 216
2025-03-11 18:17:25.124 [trace] [Window] ChatListItemRenderer#getNextProgressiveRenderContent: Want to render 183 words. Rendering 183 words. Buffer: 33 words
2025-03-11 18:17:25.125 [trace] [Window] ChatListItemRenderer#doNextProgressiveRender: not rendering any new content this tick, but more available
2025-03-11 18:17:25.173 [trace] [Window] ChatListItemRenderer#doNextProgressiveRender: START progressive render, index=1, renderData={"lastRenderTime":1741717045123,"renderedWordCount":183,"renderedParts":[{"kind":"markdownContent","content":{"value":"### [test.py](file:///home/user/dev/testing/test.py)\n\nAdd functionality to compare throughput for different batch sizes and generate graphs.\n\n````python\n<vscode_codeblock_uri>file:///home/user/dev/testing/test.py</vscode_codeblock_uri>import time\nimport warnings\nimport os\nfrom typing import List\nimport matplotlib.pyplot as plt # Add matplotlib for graphing\n\nwarnings.filterwarnings(\"ignore\")\nos.environ['TF_CPP_MIN_LOG_LEVEL'] = '3'\n\nfrom outlines import models, generate, samplers\n\n# ...existing code...\n\ndef benchmark_batch_sizes(topic_pairs: List[tuple], llm_choice, batch_sizes: List[int]):\n \"\"\"Benchmark performance for different batch sizes.\"\"\"\n results = {}\n for batch_size in batch_sizes:\n print(f\"\\nBenchmarking batch size: {batch_size}\")\n start_time = time.time()\n for i in range(0, len(topic_pairs), batch_size):\n batch = topic_pairs[i:i + batch_size]\n batch_score_topics(batch, llm_choice)\n total_time = time.time() - start_time\n throughput = len(topic_pairs) / total_time\n results[batch_size] = throughput\n print(f\"Batch size: {batch_size}, Throughput: {throughput:.2f} pairs/second\")\n return results\n\ndef plot_results(results):\n \"\"\"Plot the benchmarking results.\"\"\"\n batch_sizes = list(results.keys())\n throughputs = list(results.values())\n plt.figure(figsize=(10, 6))\n plt.plot(batch_sizes, throughputs, marker='o')\n plt.xlabel('Batch Size')\n plt.ylabel('Throughput (pairs/second)')\n plt.title('Throughput vs Batch Size')\n plt.grid(True)\n plt.show()\n\ndef main():\n # Load the model with generate.choice configured\n llm_choice = load_model()\n \n # List of topic pairs to score\n topic_pairs = [\n ('artificial intelligence', 'machine learning'),\n ('quantum physics', 'astronomy'),\n ('poetry', 'literature'),\n ('basketball', 'soccer'),\n ('biology', 'chemistry'),\n ('music', 'painting'),\n ('cooking', 'baking'),\n ('psychology', 'neuroscience')\n ]\n \n # Benchmark performance","supportThemeIcons":false,"supportHtml":false}}]}
2025-03-11 18:17:25.173 [trace] [Window] ChatListItemRenderer#getNextProgressiveRenderContent: Want to render 183 at 13.783421606789611 words/s, counting...
2025-03-11 18:17:25.174 [trace] [Window] ChatListItemRenderer#getNextProgressiveRenderContent: Chunk 0: Want to render 183 words and found 183 words. Total words in chunk: 216
2025-03-11 18:17:25.174 [trace] [Window] ChatListItemRenderer#getNextProgressiveRenderContent: Want to render 183 words. Rendering 183 words. Buffer: 33 words
2025-03-11 18:17:25.174 [trace] [Window] ChatListItemRenderer#doNextProgressiveRender: not rendering any new content this tick, but more available
2025-03-11 18:17:25.223 [trace] [Window] ChatListItemRenderer#doNextProgressiveRender: START progressive render, index=1, renderData={"lastRenderTime":1741717045123,"renderedWordCount":183,"renderedParts":[{"kind":"markdownContent","content":{"value":"### [test.py](file:///home/user/dev/testing/test.py)\n\nAdd functionality to compare throughput for different batch sizes and generate graphs.\n\n````python\n<vscode_codeblock_uri>file:///home/user/dev/testing/test.py</vscode_codeblock_uri>import time\nimport warnings\nimport os\nfrom typing import List\nimport matplotlib.pyplot as plt # Add matplotlib for graphing\n\nwarnings.filterwarnings(\"ignore\")\nos.environ['TF_CPP_MIN_LOG_LEVEL'] = '3'\n\nfrom outlines import models, generate, samplers\n\n# ...existing code...\n\ndef benchmark_batch_sizes(topic_pairs: List[tuple], llm_choice, batch_sizes: List[int]):\n \"\"\"Benchmark performance for different batch sizes.\"\"\"\n results = {}\n for batch_size in batch_sizes:\n print(f\"\\nBenchmarking batch size: {batch_size}\")\n start_time = time.time()\n for i in range(0, len(topic_pairs), batch_size):\n batch = topic_pairs[i:i + batch_size]\n batch_score_topics(batch, llm_choice)\n total_time = time.time() - start_time\n throughput = len(topic_pairs) / total_time\n results[batch_size] = throughput\n print(f\"Batch size: {batch_size}, Throughput: {throughput:.2f} pairs/second\")\n return results\n\ndef plot_results(results):\n \"\"\"Plot the benchmarking results.\"\"\"\n batch_sizes = list(results.keys())\n throughputs = list(results.values())\n plt.figure(figsize=(10, 6))\n plt.plot(batch_sizes, throughputs, marker='o')\n plt.xlabel('Batch Size')\n plt.ylabel('Throughput (pairs/second)')\n plt.title('Throughput vs Batch Size')\n plt.grid(True)\n plt.show()\n\ndef main():\n # Load the model with generate.choice configured\n llm_choice = load_model()\n \n # List of topic pairs to score\n topic_pairs = [\n ('artificial intelligence', 'machine learning'),\n ('quantum physics', 'astronomy'),\n ('poetry', 'literature'),\n ('basketball', 'soccer'),\n ('biology', 'chemistry'),\n ('music', 'painting'),\n ('cooking', 'baking'),\n ('psychology', 'neuroscience')\n ]\n \n # Benchmark performance","supportThemeIcons":false,"supportHtml":false}}]}
2025-03-11 18:17:25.224 [trace] [Window] ChatListItemRenderer#getNextProgressiveRenderContent: Want to render 184 at 13.783421606789611 words/s, counting...
2025-03-11 18:17:25.224 [trace] [Window] ChatListItemRenderer#getNextProgressiveRenderContent: Chunk 0: Want to render 184 words and found 184 words. Total words in chunk: 216
2025-03-11 18:17:25.224 [trace] [Window] ChatListItemRenderer#getNextProgressiveRenderContent: Want to render 184 words. Rendering 184 words. Buffer: 32 words
2025-03-11 18:17:25.225 [trace] [Window] ChatListItemRenderer#doNextProgressiveRender: not rendering any new content this tick, but more available
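The "Chunk 0: … found N words. Total words in chunk: 216" lines suggest the word target is satisfied by walking the response's content chunks, counting words until the goal is met, and reporting the leftover as the buffer. A sketch under the same caveat (hypothetical names, behavior reconstructed from the log; only one chunk ever appears in this trace):

```typescript
// Hypothetical word-counting pass over the response's markdown chunks;
// countWords is a naive stand-in for whatever segmentation the renderer uses.
function countWords(text: string): number {
  return text.split(/\s+/).filter(w => w.length > 0).length;
}

interface Chunk {
  value: string; // e.g. the markdownContent "value" field in the payloads above
}

// Satisfy a word target from the chunks and report the leftover buffer.
function takeWords(chunks: Chunk[], wanted: number): { rendered: number; buffer: number } {
  let total = 0;
  chunks.forEach((chunk, i) => {
    total += countWords(chunk.value);
    console.log(`Chunk ${i}: want ${wanted} words, found ${Math.min(total, wanted)}; ` +
                `total words in chunk: ${total}`);
  });
  const rendered = Math.min(total, wanted);
  return { rendered, buffer: total - rendered }; // e.g. 216 - 184 = 32 above
}
```

The buffer arithmetic checks out against the cluster just above: 216 total words minus 184 rendered leaves the logged 32-word buffer.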
2025-03-11 18:17:25.273 [trace] [Window] ChatListItemRenderer#doNextProgressiveRender: START progressive render, index=1, renderData={"lastRenderTime":1741717045223,"renderedWordCount":184,"renderedParts":[{"kind":"markdownContent","content":{"value":"### [test.py](file:///home/user/dev/testing/test.py)\n\nAdd functionality to compare throughput for different batch sizes and generate graphs.\n\n````python\n<vscode_codeblock_uri>file:///home/user/dev/testing/test.py</vscode_codeblock_uri>import time\nimport warnings\nimport os\nfrom typing import List\nimport matplotlib.pyplot as plt # Add matplotlib for graphing\n\nwarnings.filterwarnings(\"ignore\")\nos.environ['TF_CPP_MIN_LOG_LEVEL'] = '3'\n\nfrom outlines import models, generate, samplers\n\n# ...existing code...\n\ndef benchmark_batch_sizes(topic_pairs: List[tuple], llm_choice, batch_sizes: List[int]):\n \"\"\"Benchmark performance for different batch sizes.\"\"\"\n results = {}\n for batch_size in batch_sizes:\n print(f\"\\nBenchmarking batch size: {batch_size}\")\n start_time = time.time()\n for i in range(0, len(topic_pairs), batch_size):\n batch = topic_pairs[i:i + batch_size]\n batch_score_topics(batch, llm_choice)\n total_time = time.time() - start_time\n throughput = len(topic_pairs) / total_time\n results[batch_size] = throughput\n print(f\"Batch size: {batch_size}, Throughput: {throughput:.2f} pairs/second\")\n return results\n\ndef plot_results(results):\n \"\"\"Plot the benchmarking results.\"\"\"\n batch_sizes = list(results.keys())\n throughputs = list(results.values())\n plt.figure(figsize=(10, 6))\n plt.plot(batch_sizes, throughputs, marker='o')\n plt.xlabel('Batch Size')\n plt.ylabel('Throughput (pairs/second)')\n plt.title('Throughput vs Batch Size')\n plt.grid(True)\n plt.show()\n\ndef main():\n # Load the model with generate.choice configured\n llm_choice = load_model()\n \n # List of topic pairs to score\n topic_pairs = [\n ('artificial intelligence', 'machine learning'),\n ('quantum physics', 'astronomy'),\n ('poetry', 'literature'),\n ('basketball', 'soccer'),\n ('biology', 'chemistry'),\n ('music', 'painting'),\n ('cooking', 'baking'),\n ('psychology', 'neuroscience')\n ]\n \n # Benchmark performance for","supportThemeIcons":false,"supportHtml":false}}]}
2025-03-11 18:17:25.274 [trace] [Window] ChatListItemRenderer#getNextProgressiveRenderContent: Want to render 184 at 13.783421606789611 words/s, counting...
2025-03-11 18:17:25.274 [trace] [Window] ChatListItemRenderer#getNextProgressiveRenderContent: Chunk 0: Want to render 184 words and found 184 words. Total words in chunk: 216
2025-03-11 18:17:25.274 [trace] [Window] ChatListItemRenderer#getNextProgressiveRenderContent: Want to render 184 words. Rendering 184 words. Buffer: 32 words
2025-03-11 18:17:25.274 [trace] [Window] ChatListItemRenderer#doNextProgressiveRender: not rendering any new content this tick, but more available
2025-03-11 18:17:25.323 [trace] [Window] ChatListItemRenderer#doNextProgressiveRender: START progressive render, index=1, renderData={"lastRenderTime":1741717045223,"renderedWordCount":184,"renderedParts":[{"kind":"markdownContent","content":{"value":"### [test.py](file:///home/user/dev/testing/test.py)\n\nAdd functionality to compare throughput for different batch sizes and generate graphs.\n\n````python\n<vscode_codeblock_uri>file:///home/user/dev/testing/test.py</vscode_codeblock_uri>import time\nimport warnings\nimport os\nfrom typing import List\nimport matplotlib.pyplot as plt # Add matplotlib for graphing\n\nwarnings.filterwarnings(\"ignore\")\nos.environ['TF_CPP_MIN_LOG_LEVEL'] = '3'\n\nfrom outlines import models, generate, samplers\n\n# ...existing code...\n\ndef benchmark_batch_sizes(topic_pairs: List[tuple], llm_choice, batch_sizes: List[int]):\n \"\"\"Benchmark performance for different batch sizes.\"\"\"\n results = {}\n for batch_size in batch_sizes:\n print(f\"\\nBenchmarking batch size: {batch_size}\")\n start_time = time.time()\n for i in range(0, len(topic_pairs), batch_size):\n batch = topic_pairs[i:i + batch_size]\n batch_score_topics(batch, llm_choice)\n total_time = time.time() - start_time\n throughput = len(topic_pairs) / total_time\n results[batch_size] = throughput\n print(f\"Batch size: {batch_size}, Throughput: {throughput:.2f} pairs/second\")\n return results\n\ndef plot_results(results):\n \"\"\"Plot the benchmarking results.\"\"\"\n batch_sizes = list(results.keys())\n throughputs = list(results.values())\n plt.figure(figsize=(10, 6))\n plt.plot(batch_sizes, throughputs, marker='o')\n plt.xlabel('Batch Size')\n plt.ylabel('Throughput (pairs/second)')\n plt.title('Throughput vs Batch Size')\n plt.grid(True)\n plt.show()\n\ndef main():\n # Load the model with generate.choice configured\n llm_choice = load_model()\n \n # List of topic pairs to score\n topic_pairs = [\n ('artificial intelligence', 'machine learning'),\n ('quantum physics', 'astronomy'),\n ('poetry', 'literature'),\n ('basketball', 'soccer'),\n ('biology', 'chemistry'),\n ('music', 'painting'),\n ('cooking', 'baking'),\n ('psychology', 'neuroscience')\n ]\n \n # Benchmark performance for","supportThemeIcons":false,"supportHtml":false}}]}
2025-03-11 18:17:25.324 [trace] [Window] ChatListItemRenderer#getNextProgressiveRenderContent: Want to render 185 at 13.783421606789611 words/s, counting...
2025-03-11 18:17:25.324 [trace] [Window] ChatListItemRenderer#getNextProgressiveRenderContent: Chunk 0: Want to render 185 words and found 185 words. Total words in chunk: 216
2025-03-11 18:17:25.325 [trace] [Window] ChatListItemRenderer#getNextProgressiveRenderContent: Want to render 185 words. Rendering 185 words. Buffer: 31 words
2025-03-11 18:17:25.325 [trace] [Window] ChatListItemRenderer#doNextProgressiveRender: not rendering any new content this tick, but more available
2025-03-11 18:17:25.373 [trace] [Window] ChatListItemRenderer#doNextProgressiveRender: START progressive render, index=1, renderData={"lastRenderTime":1741717045324,"renderedWordCount":185,"renderedParts":[{"kind":"markdownContent","content":{"value":"### [test.py](file:///home/user/dev/testing/test.py)\n\nAdd functionality to compare throughput for different batch sizes and generate graphs.\n\n````python\n<vscode_codeblock_uri>file:///home/user/dev/testing/test.py</vscode_codeblock_uri>import time\nimport warnings\nimport os\nfrom typing import List\nimport matplotlib.pyplot as plt # Add matplotlib for graphing\n\nwarnings.filterwarnings(\"ignore\")\nos.environ['TF_CPP_MIN_LOG_LEVEL'] = '3'\n\nfrom outlines import models, generate, samplers\n\n# ...existing code...\n\ndef benchmark_batch_sizes(topic_pairs: List[tuple], llm_choice, batch_sizes: List[int]):\n \"\"\"Benchmark performance for different batch sizes.\"\"\"\n results = {}\n for batch_size in batch_sizes:\n print(f\"\\nBenchmarking batch size: {batch_size}\")\n start_time = time.time()\n for i in range(0, len(topic_pairs), batch_size):\n batch = topic_pairs[i:i + batch_size]\n batch_score_topics(batch, llm_choice)\n total_time = time.time() - start_time\n throughput = len(topic_pairs) / total_time\n results[batch_size] = throughput\n print(f\"Batch size: {batch_size}, Throughput: {throughput:.2f} pairs/second\")\n return results\n\ndef plot_results(results):\n \"\"\"Plot the benchmarking results.\"\"\"\n batch_sizes = list(results.keys())\n throughputs = list(results.values())\n plt.figure(figsize=(10, 6))\n plt.plot(batch_sizes, throughputs, marker='o')\n plt.xlabel('Batch Size')\n plt.ylabel('Throughput (pairs/second)')\n plt.title('Throughput vs Batch Size')\n plt.grid(True)\n plt.show()\n\ndef main():\n # Load the model with generate.choice configured\n llm_choice = load_model()\n \n # List of topic pairs to score\n topic_pairs = [\n ('artificial intelligence', 'machine learning'),\n ('quantum physics', 'astronomy'),\n ('poetry', 'literature'),\n ('basketball', 'soccer'),\n ('biology', 'chemistry'),\n ('music', 'painting'),\n ('cooking', 'baking'),\n ('psychology', 'neuroscience')\n ]\n \n # Benchmark performance for different","supportThemeIcons":false,"supportHtml":false}}]}
2025-03-11 18:17:25.373 [trace] [Window] ChatListItemRenderer#getNextProgressiveRenderContent: Want to render 185 at 13.783421606789611 words/s, counting...
2025-03-11 18:17:25.374 [trace] [Window] ChatListItemRenderer#getNextProgressiveRenderContent: Chunk 0: Want to render 185 words and found 185 words. Total words in chunk: 216
2025-03-11 18:17:25.374 [trace] [Window] ChatListItemRenderer#getNextProgressiveRenderContent: Want to render 185 words. Rendering 185 words. Buffer: 31 words
2025-03-11 18:17:25.374 [trace] [Window] ChatListItemRenderer#doNextProgressiveRender: not rendering any new content this tick, but more available
2025-03-11 18:17:25.424 [trace] [Window] ChatListItemRenderer#doNextProgressiveRender: START progressive render, index=1, renderData={"lastRenderTime":1741717045324,"renderedWordCount":185,"renderedParts":[{"kind":"markdownContent","content":{"value":"### [test.py](file:///home/user/dev/testing/test.py)\n\nAdd functionality to compare throughput for different batch sizes and generate graphs.\n\n````python\n<vscode_codeblock_uri>file:///home/user/dev/testing/test.py</vscode_codeblock_uri>import time\nimport warnings\nimport os\nfrom typing import List\nimport matplotlib.pyplot as plt # Add matplotlib for graphing\n\nwarnings.filterwarnings(\"ignore\")\nos.environ['TF_CPP_MIN_LOG_LEVEL'] = '3'\n\nfrom outlines import models, generate, samplers\n\n# ...existing code...\n\ndef benchmark_batch_sizes(topic_pairs: List[tuple], llm_choice, batch_sizes: List[int]):\n \"\"\"Benchmark performance for different batch sizes.\"\"\"\n results = {}\n for batch_size in batch_sizes:\n print(f\"\\nBenchmarking batch size: {batch_size}\")\n start_time = time.time()\n for i in range(0, len(topic_pairs), batch_size):\n batch = topic_pairs[i:i + batch_size]\n batch_score_topics(batch, llm_choice)\n total_time = time.time() - start_time\n throughput = len(topic_pairs) / total_time\n results[batch_size] = throughput\n print(f\"Batch size: {batch_size}, Throughput: {throughput:.2f} pairs/second\")\n return results\n\ndef plot_results(results):\n \"\"\"Plot the benchmarking results.\"\"\"\n batch_sizes = list(results.keys())\n throughputs = list(results.values())\n plt.figure(figsize=(10, 6))\n plt.plot(batch_sizes, throughputs, marker='o')\n plt.xlabel('Batch Size')\n plt.ylabel('Throughput (pairs/second)')\n plt.title('Throughput vs Batch Size')\n plt.grid(True)\n plt.show()\n\ndef main():\n # Load the model with generate.choice configured\n llm_choice = load_model()\n \n # List of topic pairs to score\n topic_pairs = [\n ('artificial intelligence', 'machine learning'),\n ('quantum physics', 'astronomy'),\n ('poetry', 'literature'),\n ('basketball', 'soccer'),\n ('biology', 'chemistry'),\n ('music', 'painting'),\n ('cooking', 'baking'),\n ('psychology', 'neuroscience')\n ]\n \n # Benchmark performance for different","supportThemeIcons":false,"supportHtml":false}}]}
2025-03-11 18:17:25.424 [trace] [Window] ChatListItemRenderer#getNextProgressiveRenderContent: Want to render 186 at 13.783421606789611 words/s, counting...
2025-03-11 18:17:25.425 [trace] [Window] ChatListItemRenderer#getNextProgressiveRenderContent: Chunk 0: Want to render 186 words and found 186 words. Total words in chunk: 216
2025-03-11 18:17:25.425 [trace] [Window] ChatListItemRenderer#getNextProgressiveRenderContent: Want to render 186 words. Rendering 186 words. Buffer: 30 words
2025-03-11 18:17:25.425 [trace] [Window] ChatListItemRenderer#doNextProgressiveRender: not rendering any new content this tick, but more available
2025-03-11 18:17:25.473 [trace] [Window] ChatListItemRenderer#doNextProgressiveRender: START progressive render, index=1, renderData={"lastRenderTime":1741717045424,"renderedWordCount":186,"renderedParts":[{"kind":"markdownContent","content":{"value":"### [test.py](file:///home/user/dev/testing/test.py)\n\nAdd functionality to compare throughput for different batch sizes and generate graphs.\n\n````python\n<vscode_codeblock_uri>file:///home/user/dev/testing/test.py</vscode_codeblock_uri>import time\nimport warnings\nimport os\nfrom typing import List\nimport matplotlib.pyplot as plt # Add matplotlib for graphing\n\nwarnings.filterwarnings(\"ignore\")\nos.environ['TF_CPP_MIN_LOG_LEVEL'] = '3'\n\nfrom outlines import models, generate, samplers\n\n# ...existing code...\n\ndef benchmark_batch_sizes(topic_pairs: List[tuple], llm_choice, batch_sizes: List[int]):\n \"\"\"Benchmark performance for different batch sizes.\"\"\"\n results = {}\n for batch_size in batch_sizes:\n print(f\"\\nBenchmarking batch size: {batch_size}\")\n start_time = time.time()\n for i in range(0, len(topic_pairs), batch_size):\n batch = topic_pairs[i:i + batch_size]\n batch_score_topics(batch, llm_choice)\n total_time = time.time() - start_time\n throughput = len(topic_pairs) / total_time\n results[batch_size] = throughput\n print(f\"Batch size: {batch_size}, Throughput: {throughput:.2f} pairs/second\")\n return results\n\ndef plot_results(results):\n \"\"\"Plot the benchmarking results.\"\"\"\n batch_sizes = list(results.keys())\n throughputs = list(results.values())\n plt.figure(figsize=(10, 6))\n plt.plot(batch_sizes, throughputs, marker='o')\n plt.xlabel('Batch Size')\n plt.ylabel('Throughput (pairs/second)')\n plt.title('Throughput vs Batch Size')\n plt.grid(True)\n plt.show()\n\ndef main():\n # Load the model with generate.choice configured\n llm_choice = load_model()\n \n # List of topic pairs to score\n topic_pairs = [\n ('artificial intelligence', 'machine learning'),\n ('quantum physics', 'astronomy'),\n ('poetry', 'literature'),\n ('basketball', 'soccer'),\n ('biology', 'chemistry'),\n ('music', 'painting'),\n ('cooking', 'baking'),\n ('psychology', 'neuroscience')\n ]\n \n # Benchmark performance for different batch","supportThemeIcons":false,"supportHtml":false}}]}
2025-03-11 18:17:25.474 [trace] [Window] ChatListItemRenderer#getNextProgressiveRenderContent: Want to render 186 at 13.783421606789611 words/s, counting...
2025-03-11 18:17:25.474 [trace] [Window] ChatListItemRenderer#getNextProgressiveRenderContent: Chunk 0: Want to render 186 words and found 186 words. Total words in chunk: 216
2025-03-11 18:17:25.475 [trace] [Window] ChatListItemRenderer#getNextProgressiveRenderContent: Want to render 186 words. Rendering 186 words. Buffer: 30 words
2025-03-11 18:17:25.475 [trace] [Window] ChatListItemRenderer#doNextProgressiveRender: not rendering any new content this tick, but more available
2025-03-11 18:17:25.523 [trace] [Window] ChatListItemRenderer#doNextProgressiveRender: START progressive render, index=1, renderData={"lastRenderTime":1741717045424,"renderedWordCount":186,"renderedParts":[{"kind":"markdownContent","content":{"value":"### [test.py](file:///home/user/dev/testing/test.py)\n\nAdd functionality to compare throughput for different batch sizes and generate graphs.\n\n````python\n<vscode_codeblock_uri>file:///home/user/dev/testing/test.py</vscode_codeblock_uri>import time\nimport warnings\nimport os\nfrom typing import List\nimport matplotlib.pyplot as plt # Add matplotlib for graphing\n\nwarnings.filterwarnings(\"ignore\")\nos.environ['TF_CPP_MIN_LOG_LEVEL'] = '3'\n\nfrom outlines import models, generate, samplers\n\n# ...existing code...\n\ndef benchmark_batch_sizes(topic_pairs: List[tuple], llm_choice, batch_sizes: List[int]):\n \"\"\"Benchmark performance for different batch sizes.\"\"\"\n results = {}\n for batch_size in batch_sizes:\n print(f\"\\nBenchmarking batch size: {batch_size}\")\n start_time = time.time()\n for i in range(0, len(topic_pairs), batch_size):\n batch = topic_pairs[i:i + batch_size]\n batch_score_topics(batch, llm_choice)\n total_time = time.time() - start_time\n throughput = len(topic_pairs) / total_time\n results[batch_size] = throughput\n print(f\"Batch size: {batch_size}, Throughput: {throughput:.2f} pairs/second\")\n return results\n\ndef plot_results(results):\n \"\"\"Plot the benchmarking results.\"\"\"\n batch_sizes = list(results.keys())\n throughputs = list(results.values())\n plt.figure(figsize=(10, 6))\n plt.plot(batch_sizes, throughputs, marker='o')\n plt.xlabel('Batch Size')\n plt.ylabel('Throughput (pairs/second)')\n plt.title('Throughput vs Batch Size')\n plt.grid(True)\n plt.show()\n\ndef main():\n # Load the model with generate.choice configured\n llm_choice = load_model()\n \n # List of topic pairs to score\n topic_pairs = [\n ('artificial intelligence', 'machine learning'),\n ('quantum physics', 'astronomy'),\n ('poetry', 'literature'),\n ('basketball', 'soccer'),\n ('biology', 'chemistry'),\n ('music', 'painting'),\n ('cooking', 'baking'),\n ('psychology', 'neuroscience')\n ]\n \n # Benchmark performance for different batch","supportThemeIcons":false,"supportHtml":false}}]}
2025-03-11 18:17:25.523 [trace] [Window] ChatListItemRenderer#getNextProgressiveRenderContent: Want to render 187 at 13.783421606789611 words/s, counting...
2025-03-11 18:17:25.524 [trace] [Window] ChatListItemRenderer#getNextProgressiveRenderContent: Chunk 0: Want to render 187 words and found 187 words. Total words in chunk: 216
2025-03-11 18:17:25.524 [trace] [Window] ChatListItemRenderer#getNextProgressiveRenderContent: Want to render 187 words. Rendering 187 words. Buffer: 29 words
2025-03-11 18:17:25.524 [trace] [Window] ChatListItemRenderer#doNextProgressiveRender: not rendering any new content this tick, but more available
2025-03-11 18:17:25.573 [trace] [Window] ChatListItemRenderer#doNextProgressiveRender: START progressive render, index=1, renderData={"lastRenderTime":1741717045523,"renderedWordCount":187,"renderedParts":[{"kind":"markdownContent","content":{"value":"### [test.py](file:///home/user/dev/testing/test.py)\n\nAdd functionality to compare throughput for different batch sizes and generate graphs.\n\n````python\n<vscode_codeblock_uri>file:///home/user/dev/testing/test.py</vscode_codeblock_uri>import time\nimport warnings\nimport os\nfrom typing import List\nimport matplotlib.pyplot as plt # Add matplotlib for graphing\n\nwarnings.filterwarnings(\"ignore\")\nos.environ['TF_CPP_MIN_LOG_LEVEL'] = '3'\n\nfrom outlines import models, generate, samplers\n\n# ...existing code...\n\ndef benchmark_batch_sizes(topic_pairs: List[tuple], llm_choice, batch_sizes: List[int]):\n \"\"\"Benchmark performance for different batch sizes.\"\"\"\n results = {}\n for batch_size in batch_sizes:\n print(f\"\\nBenchmarking batch size: {batch_size}\")\n start_time = time.time()\n for i in range(0, len(topic_pairs), batch_size):\n batch = topic_pairs[i:i + batch_size]\n batch_score_topics(batch, llm_choice)\n total_time = time.time() - start_time\n throughput = len(topic_pairs) / total_time\n results[batch_size] = throughput\n print(f\"Batch size: {batch_size}, Throughput: {throughput:.2f} pairs/second\")\n return results\n\ndef plot_results(results):\n \"\"\"Plot the benchmarking results.\"\"\"\n batch_sizes = list(results.keys())\n throughputs = list(results.values())\n plt.figure(figsize=(10, 6))\n plt.plot(batch_sizes, throughputs, marker='o')\n plt.xlabel('Batch Size')\n plt.ylabel('Throughput (pairs/second)')\n plt.title('Throughput vs Batch Size')\n plt.grid(True)\n plt.show()\n\ndef main():\n # Load the model with generate.choice configured\n llm_choice = load_model()\n \n # List of topic pairs to score\n topic_pairs = [\n ('artificial intelligence', 'machine learning'),\n ('quantum physics', 'astronomy'),\n ('poetry', 'literature'),\n ('basketball', 'soccer'),\n ('biology', 'chemistry'),\n ('music', 'painting'),\n ('cooking', 'baking'),\n ('psychology', 'neuroscience')\n ]\n \n # Benchmark performance for different batch sizes","supportThemeIcons":false,"supportHtml":false}}]}
2025-03-11 18:17:25.573 [trace] [Window] ChatListItemRenderer#getNextProgressiveRenderContent: Want to render 187 at 13.783421606789611 words/s, counting...
2025-03-11 18:17:25.574 [trace] [Window] ChatListItemRenderer#getNextProgressiveRenderContent: Chunk 0: Want to render 187 words and found 187 words. Total words in chunk: 216
2025-03-11 18:17:25.574 [trace] [Window] ChatListItemRenderer#getNextProgressiveRenderContent: Want to render 187 words. Rendering 187 words. Buffer: 29 words
2025-03-11 18:17:25.574 [trace] [Window] ChatListItemRenderer#doNextProgressiveRender: not rendering any new content this tick, but more available
2025-03-11 18:17:25.623 [trace] [Window] ChatListItemRenderer#doNextProgressiveRender: START progressive render, index=1, renderData={"lastRenderTime":1741717045523,"renderedWordCount":187,"renderedParts":[{"kind":"markdownContent","content":{"value":"### [test.py](file:///home/user/dev/testing/test.py)\n\nAdd functionality to compare throughput for different batch sizes and generate graphs.\n\n````python\n<vscode_codeblock_uri>file:///home/user/dev/testing/test.py</vscode_codeblock_uri>import time\nimport warnings\nimport os\nfrom typing import List\nimport matplotlib.pyplot as plt # Add matplotlib for graphing\n\nwarnings.filterwarnings(\"ignore\")\nos.environ['TF_CPP_MIN_LOG_LEVEL'] = '3'\n\nfrom outlines import models, generate, samplers\n\n# ...existing code...\n\ndef benchmark_batch_sizes(topic_pairs: List[tuple], llm_choice, batch_sizes: List[int]):\n \"\"\"Benchmark performance for different batch sizes.\"\"\"\n results = {}\n for batch_size in batch_sizes:\n print(f\"\\nBenchmarking batch size: {batch_size}\")\n start_time = time.time()\n for i in range(0, len(topic_pairs), batch_size):\n batch = topic_pairs[i:i + batch_size]\n batch_score_topics(batch, llm_choice)\n total_time = time.time() - start_time\n throughput = len(topic_pairs) / total_time\n results[batch_size] = throughput\n print(f\"Batch size: {batch_size}, Throughput: {throughput:.2f} pairs/second\")\n return results\n\ndef plot_results(results):\n \"\"\"Plot the benchmarking results.\"\"\"\n batch_sizes = list(results.keys())\n throughputs = list(results.values())\n plt.figure(figsize=(10, 6))\n plt.plot(batch_sizes, throughputs, marker='o')\n plt.xlabel('Batch Size')\n plt.ylabel('Throughput (pairs/second)')\n plt.title('Throughput vs Batch Size')\n plt.grid(True)\n plt.show()\n\ndef main():\n # Load the model with generate.choice configured\n llm_choice = load_model()\n \n # List of topic pairs to score\n topic_pairs = [\n ('artificial intelligence', 'machine learning'),\n ('quantum physics', 'astronomy'),\n ('poetry', 'literature'),\n ('basketball', 'soccer'),\n ('biology', 'chemistry'),\n ('music', 'painting'),\n ('cooking', 'baking'),\n ('psychology', 'neuroscience')\n ]\n \n # Benchmark performance for different batch sizes","supportThemeIcons":false,"supportHtml":false}}]}
2025-03-11 18:17:25.624 [trace] [Window] ChatListItemRenderer#getNextProgressiveRenderContent: Want to render 188 at 13.783421606789611 words/s, counting...
2025-03-11 18:17:25.626 [trace] [Window] ChatListItemRenderer#getNextProgressiveRenderContent: Chunk 0: Want to render 188 words and found 188 words. Total words in chunk: 216
2025-03-11 18:17:25.627 [trace] [Window] ChatListItemRenderer#getNextProgressiveRenderContent: Want to render 188 words. Rendering 188 words. Buffer: 28 words
2025-03-11 18:17:25.627 [trace] [Window] ChatListItemRenderer#doNextProgressiveRender: not rendering any new content this tick, but more available
2025-03-11 18:17:25.673 [trace] [Window] ChatListItemRenderer#doNextProgressiveRender: START progressive render, index=1, renderData={"lastRenderTime":1741717045623,"renderedWordCount":188,"renderedParts":[{"kind":"markdownContent","content":{"value":"### [test.py](file:///home/user/dev/testing/test.py)\n\nAdd functionality to compare throughput for different batch sizes and generate graphs.\n\n````python\n<vscode_codeblock_uri>file:///home/user/dev/testing/test.py</vscode_codeblock_uri>import time\nimport warnings\nimport os\nfrom typing import List\nimport matplotlib.pyplot as plt # Add matplotlib for graphing\n\nwarnings.filterwarnings(\"ignore\")\nos.environ['TF_CPP_MIN_LOG_LEVEL'] = '3'\n\nfrom outlines import models, generate, samplers\n\n# ...existing code...\n\ndef benchmark_batch_sizes(topic_pairs: List[tuple], llm_choice, batch_sizes: List[int]):\n \"\"\"Benchmark performance for different batch sizes.\"\"\"\n results = {}\n for batch_size in batch_sizes:\n print(f\"\\nBenchmarking batch size: {batch_size}\")\n start_time = time.time()\n for i in range(0, len(topic_pairs), batch_size):\n batch = topic_pairs[i:i + batch_size]\n batch_score_topics(batch, llm_choice)\n total_time = time.time() - start_time\n throughput = len(topic_pairs) / total_time\n results[batch_size] = throughput\n print(f\"Batch size: {batch_size}, Throughput: {throughput:.2f} pairs/second\")\n return results\n\ndef plot_results(results):\n \"\"\"Plot the benchmarking results.\"\"\"\n batch_sizes = list(results.keys())\n throughputs = list(results.values())\n plt.figure(figsize=(10, 6))\n plt.plot(batch_sizes, throughputs, marker='o')\n plt.xlabel('Batch Size')\n plt.ylabel('Throughput (pairs/second)')\n plt.title('Throughput vs Batch Size')\n plt.grid(True)\n plt.show()\n\ndef main():\n # Load the model with generate.choice configured\n llm_choice = load_model()\n \n # List of topic pairs to score\n topic_pairs = [\n ('artificial intelligence', 'machine learning'),\n ('quantum physics', 'astronomy'),\n ('poetry', 'literature'),\n ('basketball', 'soccer'),\n ('biology', 'chemistry'),\n ('music', 'painting'),\n ('cooking', 'baking'),\n ('psychology', 'neuroscience')\n ]\n \n # Benchmark performance for different batch sizes\n batch_sizes","supportThemeIcons":false,"supportHtml":false}}]}
2025-03-11 18:17:25.674 [trace] [Window] ChatListItemRenderer#getNextProgressiveRenderContent: Want to render 188 at 13.783421606789611 words/s, counting...
2025-03-11 18:17:25.674 [trace] [Window] ChatListItemRenderer#getNextProgressiveRenderContent: Chunk 0: Want to render 188 words and found 188 words. Total words in chunk: 216
2025-03-11 18:17:25.675 [trace] [Window] ChatListItemRenderer#getNextProgressiveRenderContent: Want to render 188 words. Rendering 188 words. Buffer: 28 words
2025-03-11 18:17:25.675 [trace] [Window] ChatListItemRenderer#doNextProgressiveRender: not rendering any new content this tick, but more available
2025-03-11 18:17:25.723 [trace] [Window] ChatListItemRenderer#doNextProgressiveRender: START progressive render, index=1, renderData={"lastRenderTime":1741717045623,"renderedWordCount":188,"renderedParts":[{"kind":"markdownContent","content":{"value":"### [test.py](file:///home/user/dev/testing/test.py)\n\nAdd functionality to compare throughput for different batch sizes and generate graphs.\n\n````python\n<vscode_codeblock_uri>file:///home/user/dev/testing/test.py</vscode_codeblock_uri>import time\nimport warnings\nimport os\nfrom typing import List\nimport matplotlib.pyplot as plt # Add matplotlib for graphing\n\nwarnings.filterwarnings(\"ignore\")\nos.environ['TF_CPP_MIN_LOG_LEVEL'] = '3'\n\nfrom outlines import models, generate, samplers\n\n# ...existing code...\n\ndef benchmark_batch_sizes(topic_pairs: List[tuple], llm_choice, batch_sizes: List[int]):\n \"\"\"Benchmark performance for different batch sizes.\"\"\"\n results = {}\n for batch_size in batch_sizes:\n print(f\"\\nBenchmarking batch size: {batch_size}\")\n start_time = time.time()\n for i in range(0, len(topic_pairs), batch_size):\n batch = topic_pairs[i:i + batch_size]\n batch_score_topics(batch, llm_choice)\n total_time = time.time() - start_time\n throughput = len(topic_pairs) / total_time\n results[batch_size] = throughput\n print(f\"Batch size: {batch_size}, Throughput: {throughput:.2f} pairs/second\")\n return results\n\ndef plot_results(results):\n \"\"\"Plot the benchmarking results.\"\"\"\n batch_sizes = list(results.keys())\n throughputs = list(results.values())\n plt.figure(figsize=(10, 6))\n plt.plot(batch_sizes, throughputs, marker='o')\n plt.xlabel('Batch Size')\n plt.ylabel('Throughput (pairs/second)')\n plt.title('Throughput vs Batch Size')\n plt.grid(True)\n plt.show()\n\ndef main():\n # Load the model with generate.choice configured\n llm_choice = load_model()\n \n # List of topic pairs to score\n topic_pairs = [\n ('artificial intelligence', 'machine learning'),\n ('quantum physics', 'astronomy'),\n ('poetry', 'literature'),\n ('basketball', 'soccer'),\n ('biology', 'chemistry'),\n ('music', 'painting'),\n ('cooking', 'baking'),\n ('psychology', 'neuroscience')\n ]\n \n # Benchmark performance for different batch sizes\n batch_sizes","supportThemeIcons":false,"supportHtml":false}}]}
2025-03-11 18:17:25.724 [trace] [Window] ChatListItemRenderer#getNextProgressiveRenderContent: Want to render 189 at 13.783421606789611 words/s, counting...
2025-03-11 18:17:25.724 [trace] [Window] ChatListItemRenderer#getNextProgressiveRenderContent: Chunk 0: Want to render 189 words and found 189 words. Total words in chunk: 216
2025-03-11 18:17:25.724 [trace] [Window] ChatListItemRenderer#getNextProgressiveRenderContent: Want to render 189 words. Rendering 189 words. Buffer: 27 words
2025-03-11 18:17:25.724 [trace] [Window] ChatListItemRenderer#doNextProgressiveRender: not rendering any new content this tick, but more available
2025-03-11 18:17:25.773 [trace] [Window] ChatListItemRenderer#doNextProgressiveRender: START progressive render, index=1, renderData={"lastRenderTime":1741717045723,"renderedWordCount":189,"renderedParts":[{"kind":"markdownContent","content":{"value":"### [test.py](file:///home/user/dev/testing/test.py)\n\nAdd functionality to compare throughput for different batch sizes and generate graphs.\n\n````python\n<vscode_codeblock_uri>file:///home/user/dev/testing/test.py</vscode_codeblock_uri>import time\nimport warnings\nimport os\nfrom typing import List\nimport matplotlib.pyplot as plt # Add matplotlib for graphing\n\nwarnings.filterwarnings(\"ignore\")\nos.environ['TF_CPP_MIN_LOG_LEVEL'] = '3'\n\nfrom outlines import models, generate, samplers\n\n# ...existing code...\n\ndef benchmark_batch_sizes(topic_pairs: List[tuple], llm_choice, batch_sizes: List[int]):\n \"\"\"Benchmark performance for different batch sizes.\"\"\"\n results = {}\n for batch_size in batch_sizes:\n print(f\"\\nBenchmarking batch size: {batch_size}\")\n start_time = time.time()\n for i in range(0, len(topic_pairs), batch_size):\n batch = topic_pairs[i:i + batch_size]\n batch_score_topics(batch, llm_choice)\n total_time = time.time() - start_time\n throughput = len(topic_pairs) / total_time\n results[batch_size] = throughput\n print(f\"Batch size: {batch_size}, Throughput: {throughput:.2f} pairs/second\")\n return results\n\ndef plot_results(results):\n \"\"\"Plot the benchmarking results.\"\"\"\n batch_sizes = list(results.keys())\n throughputs = list(results.values())\n plt.figure(figsize=(10, 6))\n plt.plot(batch_sizes, throughputs, marker='o')\n plt.xlabel('Batch Size')\n plt.ylabel('Throughput (pairs/second)')\n plt.title('Throughput vs Batch Size')\n plt.grid(True)\n plt.show()\n\ndef main():\n # Load the model with generate.choice configured\n llm_choice = load_model()\n \n # List of topic pairs to score\n topic_pairs = [\n ('artificial intelligence', 'machine learning'),\n ('quantum physics', 'astronomy'),\n ('poetry', 'literature'),\n ('basketball', 'soccer'),\n ('biology', 'chemistry'),\n ('music', 'painting'),\n ('cooking', 'baking'),\n ('psychology', 'neuroscience')\n ]\n \n # Benchmark performance for different batch sizes\n batch_sizes =","supportThemeIcons":false,"supportHtml":false}}]}
2025-03-11 18:17:25.774 [trace] [Window] ChatListItemRenderer#getNextProgressiveRenderContent: Want to render 189 at 13.783421606789611 words/s, counting...
2025-03-11 18:17:25.774 [trace] [Window] ChatListItemRenderer#getNextProgressiveRenderContent: Chunk 0: Want to render 189 words and found 189 words. Total words in chunk: 216
2025-03-11 18:17:25.774 [trace] [Window] ChatListItemRenderer#getNextProgressiveRenderContent: Want to render 189 words. Rendering 189 words. Buffer: 27 words
2025-03-11 18:17:25.775 [trace] [Window] ChatListItemRenderer#doNextProgressiveRender: not rendering any new content this tick, but more available
2025-03-11 18:17:25.823 [trace] [Window] ChatListItemRenderer#doNextProgressiveRender: START progressive render, index=1, renderData={"lastRenderTime":1741717045723,"renderedWordCount":189,"renderedParts":[{"kind":"markdownContent","content":{"value":"### [test.py](file:///home/user/dev/testing/test.py)\n\nAdd functionality to compare throughput for different batch sizes and generate graphs.\n\n````python\n<vscode_codeblock_uri>file:///home/user/dev/testing/test.py</vscode_codeblock_uri>import time\nimport warnings\nimport os\nfrom typing import List\nimport matplotlib.pyplot as plt # Add matplotlib for graphing\n\nwarnings.filterwarnings(\"ignore\")\nos.environ['TF_CPP_MIN_LOG_LEVEL'] = '3'\n\nfrom outlines import models, generate, samplers\n\n# ...existing code...\n\ndef benchmark_batch_sizes(topic_pairs: List[tuple], llm_choice, batch_sizes: List[int]):\n \"\"\"Benchmark performance for different batch sizes.\"\"\"\n results = {}\n for batch_size in batch_sizes:\n print(f\"\\nBenchmarking batch size: {batch_size}\")\n start_time = time.time()\n for i in range(0, len(topic_pairs), batch_size):\n batch = topic_pairs[i:i + batch_size]\n batch_score_topics(batch, llm_choice)\n total_time = time.time() - start_time\n throughput = len(topic_pairs) / total_time\n results[batch_size] = throughput\n print(f\"Batch size: {batch_size}, Throughput: {throughput:.2f} pairs/second\")\n return results\n\ndef plot_results(results):\n \"\"\"Plot the benchmarking results.\"\"\"\n batch_sizes = list(results.keys())\n throughputs = list(results.values())\n plt.figure(figsize=(10, 6))\n plt.plot(batch_sizes, throughputs, marker='o')\n plt.xlabel('Batch Size')\n plt.ylabel('Throughput (pairs/second)')\n plt.title('Throughput vs Batch Size')\n plt.grid(True)\n plt.show()\n\ndef main():\n # Load the model with generate.choice configured\n llm_choice = load_model()\n \n # List of topic pairs to score\n topic_pairs = [\n ('artificial intelligence', 'machine learning'),\n ('quantum physics', 'astronomy'),\n ('poetry', 'literature'),\n ('basketball', 'soccer'),\n ('biology', 'chemistry'),\n ('music', 'painting'),\n ('cooking', 'baking'),\n ('psychology', 'neuroscience')\n ]\n \n # Benchmark performance for different batch sizes\n batch_sizes =","supportThemeIcons":false,"supportHtml":false}}]}
2025-03-11 18:17:25.824 [trace] [Window] ChatListItemRenderer#getNextProgressiveRenderContent: Want to render 190 at 13.783421606789611 words/s, counting...
2025-03-11 18:17:25.824 [trace] [Window] ChatListItemRenderer#getNextProgressiveRenderContent: Chunk 0: Want to render 190 words and found 190 words. Total words in chunk: 216
2025-03-11 18:17:25.824 [trace] [Window] ChatListItemRenderer#getNextProgressiveRenderContent: Want to render 190 words. Rendering 190 words. Buffer: 26 words
2025-03-11 18:17:25.825 [trace] [Window] ChatListItemRenderer#doNextProgressiveRender: not rendering any new content this tick, but more available
2025-03-11 18:17:25.873 [trace] [Window] ChatListItemRenderer#doNextProgressiveRender: START progressive render, index=1, renderData={"lastRenderTime":1741717045823,"renderedWordCount":190,"renderedParts":[{"kind":"markdownContent","content":{"value":"### [test.py](file:///home/user/dev/testing/test.py)\n\nAdd functionality to compare throughput for different batch sizes and generate graphs.\n\n````python\n<vscode_codeblock_uri>file:///home/user/dev/testing/test.py</vscode_codeblock_uri>import time\nimport warnings\nimport os\nfrom typing import List\nimport matplotlib.pyplot as plt # Add matplotlib for graphing\n\nwarnings.filterwarnings(\"ignore\")\nos.environ['TF_CPP_MIN_LOG_LEVEL'] = '3'\n\nfrom outlines import models, generate, samplers\n\n# ...existing code...\n\ndef benchmark_batch_sizes(topic_pairs: List[tuple], llm_choice, batch_sizes: List[int]):\n \"\"\"Benchmark performance for different batch sizes.\"\"\"\n results = {}\n for batch_size in batch_sizes:\n print(f\"\\nBenchmarking batch size: {batch_size}\")\n start_time = time.time()\n for i in range(0, len(topic_pairs), batch_size):\n batch = topic_pairs[i:i + batch_size]\n batch_score_topics(batch, llm_choice)\n total_time = time.time() - start_time\n throughput = len(topic_pairs) / total_time\n results[batch_size] = throughput\n print(f\"Batch size: {batch_size}, Throughput: {throughput:.2f} pairs/second\")\n return results\n\ndef plot_results(results):\n \"\"\"Plot the benchmarking results.\"\"\"\n batch_sizes = list(results.keys())\n throughputs = list(results.values())\n plt.figure(figsize=(10, 6))\n plt.plot(batch_sizes, throughputs, marker='o')\n plt.xlabel('Batch Size')\n plt.ylabel('Throughput (pairs/second)')\n plt.title('Throughput vs Batch Size')\n plt.grid(True)\n plt.show()\n\ndef main():\n # Load the model with generate.choice configured\n llm_choice = load_model()\n \n # List of topic pairs to score\n topic_pairs = [\n ('artificial intelligence', 'machine learning'),\n ('quantum physics', 'astronomy'),\n ('poetry', 'literature'),\n ('basketball', 'soccer'),\n ('biology', 'chemistry'),\n ('music', 'painting'),\n ('cooking', 'baking'),\n ('psychology', 'neuroscience')\n ]\n \n # Benchmark performance for different batch sizes\n batch_sizes = [1,","supportThemeIcons":false,"supportHtml":false}}]}
2025-03-11 18:17:25.874 [trace] [Window] ChatListItemRenderer#getNextProgressiveRenderContent: Want to render 190 at 13.783421606789611 words/s, counting...
2025-03-11 18:17:25.874 [trace] [Window] ChatListItemRenderer#getNextProgressiveRenderContent: Chunk 0: Want to render 190 words and found 190 words. Total words in chunk: 216
2025-03-11 18:17:25.874 [trace] [Window] ChatListItemRenderer#getNextProgressiveRenderContent: Want to render 190 words. Rendering 190 words. Buffer: 26 words
2025-03-11 18:17:25.875 [trace] [Window] ChatListItemRenderer#doNextProgressiveRender: not rendering any new content this tick, but more available
2025-03-11 18:17:25.923 [trace] [Window] ChatListItemRenderer#doNextProgressiveRender: START progressive render, index=1, renderData={"lastRenderTime":1741717045823,"renderedWordCount":190,"renderedParts":[{"kind":"markdownContent","content":{"value":"### [test.py](file:///home/user/dev/testing/test.py)\n\nAdd functionality to compare throughput for different batch sizes and generate graphs.\n\n````python\n<vscode_codeblock_uri>file:///home/user/dev/testing/test.py</vscode_codeblock_uri>import time\nimport warnings\nimport os\nfrom typing import List\nimport matplotlib.pyplot as plt # Add matplotlib for graphing\n\nwarnings.filterwarnings(\"ignore\")\nos.environ['TF_CPP_MIN_LOG_LEVEL'] = '3'\n\nfrom outlines import models, generate, samplers\n\n# ...existing code...\n\ndef benchmark_batch_sizes(topic_pairs: List[tuple], llm_choice, batch_sizes: List[int]):\n \"\"\"Benchmark performance for different batch sizes.\"\"\"\n results = {}\n for batch_size in batch_sizes:\n print(f\"\\nBenchmarking batch size: {batch_size}\")\n start_time = time.time()\n for i in range(0, len(topic_pairs), batch_size):\n batch = topic_pairs[i:i + batch_size]\n batch_score_topics(batch, llm_choice)\n total_time = time.time() - start_time\n throughput = len(topic_pairs) / total_time\n results[batch_size] = throughput\n print(f\"Batch size: {batch_size}, Throughput: {throughput:.2f} pairs/second\")\n return results\n\ndef plot_results(results):\n \"\"\"Plot the benchmarking results.\"\"\"\n batch_sizes = list(results.keys())\n throughputs = list(results.values())\n plt.figure(figsize=(10, 6))\n plt.plot(batch_sizes, throughputs, marker='o')\n plt.xlabel('Batch Size')\n plt.ylabel('Throughput (pairs/second)')\n plt.title('Throughput vs Batch Size')\n plt.grid(True)\n plt.show()\n\ndef main():\n # Load the model with generate.choice configured\n llm_choice = load_model()\n \n # List of topic pairs to score\n topic_pairs = [\n ('artificial intelligence', 'machine learning'),\n ('quantum physics', 'astronomy'),\n ('poetry', 'literature'),\n ('basketball', 'soccer'),\n ('biology', 'chemistry'),\n ('music', 'painting'),\n ('cooking', 'baking'),\n ('psychology', 'neuroscience')\n ]\n \n # Benchmark performance for different batch sizes\n batch_sizes = [1,","supportThemeIcons":false,"supportHtml":false}}]}
2025-03-11 18:17:25.923 [trace] [Window] ChatListItemRenderer#getNextProgressiveRenderContent: Want to render 191 at 13.783421606789611 words/s, counting...
2025-03-11 18:17:25.924 [trace] [Window] ChatListItemRenderer#getNextProgressiveRenderContent: Chunk 0: Want to render 191 words and found 191 words. Total words in chunk: 216
2025-03-11 18:17:25.924 [trace] [Window] ChatListItemRenderer#getNextProgressiveRenderContent: Want to render 191 words. Rendering 191 words. Buffer: 25 words
2025-03-11 18:17:25.924 [trace] [Window] ChatListItemRenderer#doNextProgressiveRender: not rendering any new content this tick, but more available
2025-03-11 18:17:25.973 [trace] [Window] ChatListItemRenderer#doNextProgressiveRender: START progressive render, index=1, renderData={"lastRenderTime":1741717045923,"renderedWordCount":191,"renderedParts":[{"kind":"markdownContent","content":{"value":"### [test.py](file:///home/user/dev/testing/test.py)\n\nAdd functionality to compare throughput for different batch sizes and generate graphs.\n\n````python\n<vscode_codeblock_uri>file:///home/user/dev/testing/test.py</vscode_codeblock_uri>import time\nimport warnings\nimport os\nfrom typing import List\nimport matplotlib.pyplot as plt # Add matplotlib for graphing\n\nwarnings.filterwarnings(\"ignore\")\nos.environ['TF_CPP_MIN_LOG_LEVEL'] = '3'\n\nfrom outlines import models, generate, samplers\n\n# ...existing code...\n\ndef benchmark_batch_sizes(topic_pairs: List[tuple], llm_choice, batch_sizes: List[int]):\n \"\"\"Benchmark performance for different batch sizes.\"\"\"\n results = {}\n for batch_size in batch_sizes:\n print(f\"\\nBenchmarking batch size: {batch_size}\")\n start_time = time.time()\n for i in range(0, len(topic_pairs), batch_size):\n batch = topic_pairs[i:i + batch_size]\n batch_score_topics(batch, llm_choice)\n total_time = time.time() - start_time\n throughput = len(topic_pairs) / total_time\n results[batch_size] = throughput\n print(f\"Batch size: {batch_size}, Throughput: {throughput:.2f} pairs/second\")\n return results\n\ndef plot_results(results):\n \"\"\"Plot the benchmarking results.\"\"\"\n batch_sizes = list(results.keys())\n throughputs = list(results.values())\n plt.figure(figsize=(10, 6))\n plt.plot(batch_sizes, throughputs, marker='o')\n plt.xlabel('Batch Size')\n plt.ylabel('Throughput (pairs/second)')\n plt.title('Throughput vs Batch Size')\n plt.grid(True)\n plt.show()\n\ndef main():\n # Load the model with generate.choice configured\n llm_choice = load_model()\n \n # List of topic pairs to score\n topic_pairs = [\n ('artificial intelligence', 'machine learning'),\n ('quantum physics', 'astronomy'),\n ('poetry', 'literature'),\n ('basketball', 'soccer'),\n ('biology', 'chemistry'),\n ('music', 'painting'),\n ('cooking', 'baking'),\n ('psychology', 'neuroscience')\n ]\n \n # Benchmark performance for different batch sizes\n batch_sizes = [1, 2,","supportThemeIcons":false,"supportHtml":false}}]}
2025-03-11 18:17:25.974 [trace] [Window] ChatListItemRenderer#getNextProgressiveRenderContent: Want to render 191 at 13.783421606789611 words/s, counting...
2025-03-11 18:17:25.974 [trace] [Window] ChatListItemRenderer#getNextProgressiveRenderContent: Chunk 0: Want to render 191 words and found 191 words. Total words in chunk: 216
2025-03-11 18:17:25.974 [trace] [Window] ChatListItemRenderer#getNextProgressiveRenderContent: Want to render 191 words. Rendering 191 words. Buffer: 25 words
2025-03-11 18:17:25.975 [trace] [Window] ChatListItemRenderer#doNextProgressiveRender: not rendering any new content this tick, but more available
2025-03-11 18:17:26.023 [trace] [Window] ChatListItemRenderer#doNextProgressiveRender: START progressive render, index=1, renderData={"lastRenderTime":1741717045923,"renderedWordCount":191,"renderedParts":[{"kind":"markdownContent","content":{"value":"### [test.py](file:///home/user/dev/testing/test.py)\n\nAdd functionality to compare throughput for different batch sizes and generate graphs.\n\n````python\n<vscode_codeblock_uri>file:///home/user/dev/testing/test.py</vscode_codeblock_uri>import time\nimport warnings\nimport os\nfrom typing import List\nimport matplotlib.pyplot as plt # Add matplotlib for graphing\n\nwarnings.filterwarnings(\"ignore\")\nos.environ['TF_CPP_MIN_LOG_LEVEL'] = '3'\n\nfrom outlines import models, generate, samplers\n\n# ...existing code...\n\ndef benchmark_batch_sizes(topic_pairs: List[tuple], llm_choice, batch_sizes: List[int]):\n \"\"\"Benchmark performance for different batch sizes.\"\"\"\n results = {}\n for batch_size in batch_sizes:\n print(f\"\\nBenchmarking batch size: {batch_size}\")\n start_time = time.time()\n for i in range(0, len(topic_pairs), batch_size):\n batch = topic_pairs[i:i + batch_size]\n batch_score_topics(batch, llm_choice)\n total_time = time.time() - start_time\n throughput = len(topic_pairs) / total_time\n results[batch_size] = throughput\n print(f\"Batch size: {batch_size}, Throughput: {throughput:.2f} pairs/second\")\n return results\n\ndef plot_results(results):\n \"\"\"Plot the benchmarking results.\"\"\"\n batch_sizes = list(results.keys())\n throughputs = list(results.values())\n plt.figure(figsize=(10, 6))\n plt.plot(batch_sizes, throughputs, marker='o')\n plt.xlabel('Batch Size')\n plt.ylabel('Throughput (pairs/second)')\n plt.title('Throughput vs Batch Size')\n plt.grid(True)\n plt.show()\n\ndef main():\n # Load the model with generate.choice configured\n llm_choice = load_model()\n \n # List of topic pairs to score\n topic_pairs = [\n ('artificial intelligence', 'machine learning'),\n ('quantum physics', 'astronomy'),\n ('poetry', 'literature'),\n ('basketball', 'soccer'),\n ('biology', 'chemistry'),\n ('music', 'painting'),\n ('cooking', 'baking'),\n ('psychology', 'neuroscience')\n ]\n \n # Benchmark performance for different batch sizes\n batch_sizes = [1, 2,","supportThemeIcons":false,"supportHtml":false}}]}
2025-03-11 18:17:26.024 [trace] [Window] ChatListItemRenderer#getNextProgressiveRenderContent: Want to render 192 at 13.783421606789611 words/s, counting...
2025-03-11 18:17:26.024 [trace] [Window] ChatListItemRenderer#getNextProgressiveRenderContent: Chunk 0: Want to render 192 words and found 192 words. Total words in chunk: 216
2025-03-11 18:17:26.024 [trace] [Window] ChatListItemRenderer#getNextProgressiveRenderContent: Want to render 192 words. Rendering 192 words. Buffer: 24 words
2025-03-11 18:17:26.025 [trace] [Window] ChatListItemRenderer#doNextProgressiveRender: not rendering any new content this tick, but more available
2025-03-11 18:17:26.073 [trace] [Window] ChatListItemRenderer#doNextProgressiveRender: START progressive render, index=1, renderData={"lastRenderTime":1741717046023,"renderedWordCount":192,"renderedParts":[{"kind":"markdownContent","content":{"value":"### [test.py](file:///home/user/dev/testing/test.py)\n\nAdd functionality to compare throughput for different batch sizes and generate graphs.\n\n````python\n<vscode_codeblock_uri>file:///home/user/dev/testing/test.py</vscode_codeblock_uri>import time\nimport warnings\nimport os\nfrom typing import List\nimport matplotlib.pyplot as plt # Add matplotlib for graphing\n\nwarnings.filterwarnings(\"ignore\")\nos.environ['TF_CPP_MIN_LOG_LEVEL'] = '3'\n\nfrom outlines import models, generate, samplers\n\n# ...existing code...\n\ndef benchmark_batch_sizes(topic_pairs: List[tuple], llm_choice, batch_sizes: List[int]):\n \"\"\"Benchmark performance for different batch sizes.\"\"\"\n results = {}\n for batch_size in batch_sizes:\n print(f\"\\nBenchmarking batch size: {batch_size}\")\n start_time = time.time()\n for i in range(0, len(topic_pairs), batch_size):\n batch = topic_pairs[i:i + batch_size]\n batch_score_topics(batch, llm_choice)\n total_time = time.time() - start_time\n throughput = len(topic_pairs) / total_time\n results[batch_size] = throughput\n print(f\"Batch size: {batch_size}, Throughput: {throughput:.2f} pairs/second\")\n return results\n\ndef plot_results(results):\n \"\"\"Plot the benchmarking results.\"\"\"\n batch_sizes = list(results.keys())\n throughputs = list(results.values())\n plt.figure(figsize=(10, 6))\n plt.plot(batch_sizes, throughputs, marker='o')\n plt.xlabel('Batch Size')\n plt.ylabel('Throughput (pairs/second)')\n plt.title('Throughput vs Batch Size')\n plt.grid(True)\n plt.show()\n\ndef main():\n # Load the model with generate.choice configured\n llm_choice = load_model()\n \n # List of topic pairs to score\n topic_pairs = [\n ('artificial intelligence', 'machine learning'),\n ('quantum physics', 'astronomy'),\n ('poetry', 'literature'),\n ('basketball', 'soccer'),\n ('biology', 'chemistry'),\n ('music', 'painting'),\n ('cooking', 'baking'),\n ('psychology', 'neuroscience')\n ]\n \n # Benchmark performance for different batch sizes\n batch_sizes = [1, 2, 4,","supportThemeIcons":false,"supportHtml":false}}]}
2025-03-11 18:17:26.073 [trace] [Window] ChatListItemRenderer#getNextProgressiveRenderContent: Want to render 192 at 13.783421606789611 words/s, counting...
2025-03-11 18:17:26.073 [trace] [Window] ChatListItemRenderer#getNextProgressiveRenderContent: Chunk 0: Want to render 192 words and found 192 words. Total words in chunk: 216
2025-03-11 18:17:26.074 [trace] [Window] ChatListItemRenderer#getNextProgressiveRenderContent: Want to render 192 words. Rendering 192 words. Buffer: 24 words
2025-03-11 18:17:26.074 [trace] [Window] ChatListItemRenderer#doNextProgressiveRender: not rendering any new content this tick, but more available
2025-03-11 18:17:26.123 [trace] [Window] ChatListItemRenderer#doNextProgressiveRender: START progressive render, index=1, renderData={"lastRenderTime":1741717046023,"renderedWordCount":192,"renderedParts":[{"kind":"markdownContent","content":{"value":"### [test.py](file:///home/user/dev/testing/test.py)\n\nAdd functionality to compare throughput for different batch sizes and generate graphs.\n\n````python\n<vscode_codeblock_uri>file:///home/user/dev/testing/test.py</vscode_codeblock_uri>import time\nimport warnings\nimport os\nfrom typing import List\nimport matplotlib.pyplot as plt # Add matplotlib for graphing\n\nwarnings.filterwarnings(\"ignore\")\nos.environ['TF_CPP_MIN_LOG_LEVEL'] = '3'\n\nfrom outlines import models, generate, samplers\n\n# ...existing code...\n\ndef benchmark_batch_sizes(topic_pairs: List[tuple], llm_choice, batch_sizes: List[int]):\n \"\"\"Benchmark performance for different batch sizes.\"\"\"\n results = {}\n for batch_size in batch_sizes:\n print(f\"\\nBenchmarking batch size: {batch_size}\")\n start_time = time.time()\n for i in range(0, len(topic_pairs), batch_size):\n batch = topic_pairs[i:i + batch_size]\n batch_score_topics(batch, llm_choice)\n total_time = time.time() - start_time\n throughput = len(topic_pairs) / total_time\n results[batch_size] = throughput\n print(f\"Batch size: {batch_size}, Throughput: {throughput:.2f} pairs/second\")\n return results\n\ndef plot_results(results):\n \"\"\"Plot the benchmarking results.\"\"\"\n batch_sizes = list(results.keys())\n throughputs = list(results.values())\n plt.figure(figsize=(10, 6))\n plt.plot(batch_sizes, throughputs, marker='o')\n plt.xlabel('Batch Size')\n plt.ylabel('Throughput (pairs/second)')\n plt.title('Throughput vs Batch Size')\n plt.grid(True)\n plt.show()\n\ndef main():\n # Load the model with generate.choice configured\n llm_choice = load_model()\n \n # List of topic pairs to score\n topic_pairs = [\n ('artificial intelligence', 'machine learning'),\n ('quantum physics', 'astronomy'),\n ('poetry', 'literature'),\n ('basketball', 'soccer'),\n ('biology', 'chemistry'),\n ('music', 'painting'),\n ('cooking', 'baking'),\n ('psychology', 'neuroscience')\n ]\n \n # Benchmark performance for different batch sizes\n batch_sizes = [1, 2, 4,","supportThemeIcons":false,"supportHtml":false}}]}
2025-03-11 18:17:26.124 [trace] [Window] ChatListItemRenderer#getNextProgressiveRenderContent: Want to render 193 at 13.783421606789611 words/s, counting...
2025-03-11 18:17:26.124 [trace] [Window] ChatListItemRenderer#getNextProgressiveRenderContent: Chunk 0: Want to render 193 words and found 193 words. Total words in chunk: 216
2025-03-11 18:17:26.124 [trace] [Window] ChatListItemRenderer#getNextProgressiveRenderContent: Want to render 193 words. Rendering 193 words. Buffer: 23 words
2025-03-11 18:17:26.125 [trace] [Window] ChatListItemRenderer#doNextProgressiveRender: not rendering any new content this tick, but more available
2025-03-11 18:17:26.173 [trace] [Window] ChatListItemRenderer#doNextProgressiveRender: START progressive render, index=1, renderData={"lastRenderTime":1741717046123,"renderedWordCount":193,"renderedParts":[{"kind":"markdownContent","content":{"value":"### [test.py](file:///home/user/dev/testing/test.py)\n\nAdd functionality to compare throughput for different batch sizes and generate graphs.\n\n````python\n<vscode_codeblock_uri>file:///home/user/dev/testing/test.py</vscode_codeblock_uri>import time\nimport warnings\nimport os\nfrom typing import List\nimport matplotlib.pyplot as plt # Add matplotlib for graphing\n\nwarnings.filterwarnings(\"ignore\")\nos.environ['TF_CPP_MIN_LOG_LEVEL'] = '3'\n\nfrom outlines import models, generate, samplers\n\n# ...existing code...\n\ndef benchmark_batch_sizes(topic_pairs: List[tuple], llm_choice, batch_sizes: List[int]):\n \"\"\"Benchmark performance for different batch sizes.\"\"\"\n results = {}\n for batch_size in batch_sizes:\n print(f\"\\nBenchmarking batch size: {batch_size}\")\n start_time = time.time()\n for i in range(0, len(topic_pairs), batch_size):\n batch = topic_pairs[i:i + batch_size]\n batch_score_topics(batch, llm_choice)\n total_time = time.time() - start_time\n throughput = len(topic_pairs) / total_time\n results[batch_size] = throughput\n print(f\"Batch size: {batch_size}, Throughput: {throughput:.2f} pairs/second\")\n return results\n\ndef plot_results(results):\n \"\"\"Plot the benchmarking results.\"\"\"\n batch_sizes = list(results.keys())\n throughputs = list(results.values())\n plt.figure(figsize=(10, 6))\n plt.plot(batch_sizes, throughputs, marker='o')\n plt.xlabel('Batch Size')\n plt.ylabel('Throughput (pairs/second)')\n plt.title('Throughput vs Batch Size')\n plt.grid(True)\n plt.show()\n\ndef main():\n # Load the model with generate.choice configured\n llm_choice = load_model()\n \n # List of topic pairs to score\n topic_pairs = [\n ('artificial intelligence', 'machine learning'),\n ('quantum physics', 'astronomy'),\n ('poetry', 'literature'),\n ('basketball', 'soccer'),\n ('biology', 'chemistry'),\n ('music', 'painting'),\n ('cooking', 'baking'),\n ('psychology', 'neuroscience')\n ]\n \n # Benchmark performance for different batch sizes\n batch_sizes = [1, 2, 4, 8]","supportThemeIcons":false,"supportHtml":false}}]}
2025-03-11 18:17:26.173 [trace] [Window] ChatListItemRenderer#getNextProgressiveRenderContent: Want to render 193 at 13.783421606789611 words/s, counting...
2025-03-11 18:17:26.174 [trace] [Window] ChatListItemRenderer#getNextProgressiveRenderContent: Chunk 0: Want to render 193 words and found 193 words. Total words in chunk: 216
2025-03-11 18:17:26.174 [trace] [Window] ChatListItemRenderer#getNextProgressiveRenderContent: Want to render 193 words. Rendering 193 words. Buffer: 23 words
2025-03-11 18:17:26.174 [trace] [Window] ChatListItemRenderer#doNextProgressiveRender: not rendering any new content this tick, but more available
2025-03-11 18:17:26.223 [trace] [Window] ChatListItemRenderer#doNextProgressiveRender: START progressive render, index=1, renderData={"lastRenderTime":1741717046123,"renderedWordCount":193,"renderedParts":[{"kind":"markdownContent","content":{"value":"### [test.py](file:///home/user/dev/testing/test.py)\n\nAdd functionality to compare throughput for different batch sizes and generate graphs.\n\n````python\n<vscode_codeblock_uri>file:///home/user/dev/testing/test.py</vscode_codeblock_uri>import time\nimport warnings\nimport os\nfrom typing import List\nimport matplotlib.pyplot as plt # Add matplotlib for graphing\n\nwarnings.filterwarnings(\"ignore\")\nos.environ['TF_CPP_MIN_LOG_LEVEL'] = '3'\n\nfrom outlines import models, generate, samplers\n\n# ...existing code...\n\ndef benchmark_batch_sizes(topic_pairs: List[tuple], llm_choice, batch_sizes: List[int]):\n \"\"\"Benchmark performance for different batch sizes.\"\"\"\n results = {}\n for batch_size in batch_sizes:\n print(f\"\\nBenchmarking batch size: {batch_size}\")\n start_time = time.time()\n for i in range(0, len(topic_pairs), batch_size):\n batch = topic_pairs[i:i + batch_size]\n batch_score_topics(batch, llm_choice)\n total_time = time.time() - start_time\n throughput = len(topic_pairs) / total_time\n results[batch_size] = throughput\n print(f\"Batch size: {batch_size}, Throughput: {throughput:.2f} pairs/second\")\n return results\n\ndef plot_results(results):\n \"\"\"Plot the benchmarking results.\"\"\"\n batch_sizes = list(results.keys())\n throughputs = list(results.values())\n plt.figure(figsize=(10, 6))\n plt.plot(batch_sizes, throughputs, marker='o')\n plt.xlabel('Batch Size')\n plt.ylabel('Throughput (pairs/second)')\n plt.title('Throughput vs Batch Size')\n plt.grid(True)\n plt.show()\n\ndef main():\n # Load the model with generate.choice configured\n llm_choice = load_model()\n \n # List of topic pairs to score\n topic_pairs = [\n ('artificial intelligence', 'machine learning'),\n ('quantum physics', 'astronomy'),\n ('poetry', 'literature'),\n ('basketball', 'soccer'),\n ('biology', 'chemistry'),\n ('music', 'painting'),\n ('cooking', 'baking'),\n ('psychology', 'neuroscience')\n ]\n \n # Benchmark performance for different batch sizes\n batch_sizes = [1, 2, 4, 8]","supportThemeIcons":false,"supportHtml":false}}]}
2025-03-11 18:17:26.224 [trace] [Window] ChatListItemRenderer#getNextProgressiveRenderContent: Want to render 194 at 13.783421606789611 words/s, counting...
2025-03-11 18:17:26.224 [trace] [Window] ChatListItemRenderer#getNextProgressiveRenderContent: Chunk 0: Want to render 194 words and found 194 words. Total words in chunk: 216
2025-03-11 18:17:26.224 [trace] [Window] ChatListItemRenderer#getNextProgressiveRenderContent: Want to render 194 words. Rendering 194 words. Buffer: 22 words
2025-03-11 18:17:26.224 [trace] [Window] ChatListItemRenderer#doNextProgressiveRender: not rendering any new content this tick, but more available
2025-03-11 18:17:26.273 [trace] [Window] ChatListItemRenderer#doNextProgressiveRender: START progressive render, index=1, renderData={"lastRenderTime":1741717046223,"renderedWordCount":194,"renderedParts":[{"kind":"markdownContent","content":{"value":"### [test.py](file:///home/user/dev/testing/test.py)\n\nAdd functionality to compare throughput for different batch sizes and generate graphs.\n\n````python\n<vscode_codeblock_uri>file:///home/user/dev/testing/test.py</vscode_codeblock_uri>import time\nimport warnings\nimport os\nfrom typing import List\nimport matplotlib.pyplot as plt # Add matplotlib for graphing\n\nwarnings.filterwarnings(\"ignore\")\nos.environ['TF_CPP_MIN_LOG_LEVEL'] = '3'\n\nfrom outlines import models, generate, samplers\n\n# ...existing code...\n\ndef benchmark_batch_sizes(topic_pairs: List[tuple], llm_choice, batch_sizes: List[int]):\n \"\"\"Benchmark performance for different batch sizes.\"\"\"\n results = {}\n for batch_size in batch_sizes:\n print(f\"\\nBenchmarking batch size: {batch_size}\")\n start_time = time.time()\n for i in range(0, len(topic_pairs), batch_size):\n batch = topic_pairs[i:i + batch_size]\n batch_score_topics(batch, llm_choice)\n total_time = time.time() - start_time\n throughput = len(topic_pairs) / total_time\n results[batch_size] = throughput\n print(f\"Batch size: {batch_size}, Throughput: {throughput:.2f} pairs/second\")\n return results\n\ndef plot_results(results):\n \"\"\"Plot the benchmarking results.\"\"\"\n batch_sizes = list(results.keys())\n throughputs = list(results.values())\n plt.figure(figsize=(10, 6))\n plt.plot(batch_sizes, throughputs, marker='o')\n plt.xlabel('Batch Size')\n plt.ylabel('Throughput (pairs/second)')\n plt.title('Throughput vs Batch Size')\n plt.grid(True)\n plt.show()\n\ndef main():\n # Load the model with generate.choice configured\n llm_choice = load_model()\n \n # List of topic pairs to score\n topic_pairs = [\n ('artificial intelligence', 'machine learning'),\n ('quantum physics', 'astronomy'),\n ('poetry', 'literature'),\n ('basketball', 'soccer'),\n ('biology', 'chemistry'),\n ('music', 'painting'),\n ('cooking', 'baking'),\n ('psychology', 'neuroscience')\n ]\n \n # Benchmark performance for different batch sizes\n batch_sizes = [1, 2, 4, 8] #","supportThemeIcons":false,"supportHtml":false}}]}
2025-03-11 18:17:26.273 [trace] [Window] ChatListItemRenderer#getNextProgressiveRenderContent: Want to render 194 at 13.783421606789611 words/s, counting...
2025-03-11 18:17:26.274 [trace] [Window] ChatListItemRenderer#getNextProgressiveRenderContent: Chunk 0: Want to render 194 words and found 194 words. Total words in chunk: 216
2025-03-11 18:17:26.274 [trace] [Window] ChatListItemRenderer#getNextProgressiveRenderContent: Want to render 194 words. Rendering 194 words. Buffer: 22 words
2025-03-11 18:17:26.274 [trace] [Window] ChatListItemRenderer#doNextProgressiveRender: not rendering any new content this tick, but more available
2025-03-11 18:17:26.327 [trace] [Window] ChatListItemRenderer#doNextProgressiveRender: START progressive render, index=1, renderData={"lastRenderTime":1741717046223,"renderedWordCount":194,"renderedParts":[{"kind":"markdownContent","content":{"value":"### [test.py](file:///home/user/dev/testing/test.py)\n\nAdd functionality to compare throughput for different batch sizes and generate graphs.\n\n````python\n<vscode_codeblock_uri>file:///home/user/dev/testing/test.py</vscode_codeblock_uri>import time\nimport warnings\nimport os\nfrom typing import List\nimport matplotlib.pyplot as plt # Add matplotlib for graphing\n\nwarnings.filterwarnings(\"ignore\")\nos.environ['TF_CPP_MIN_LOG_LEVEL'] = '3'\n\nfrom outlines import models, generate, samplers\n\n# ...existing code...\n\ndef benchmark_batch_sizes(topic_pairs: List[tuple], llm_choice, batch_sizes: List[int]):\n \"\"\"Benchmark performance for different batch sizes.\"\"\"\n results = {}\n for batch_size in batch_sizes:\n print(f\"\\nBenchmarking batch size: {batch_size}\")\n start_time = time.time()\n for i in range(0, len(topic_pairs), batch_size):\n batch = topic_pairs[i:i + batch_size]\n batch_score_topics(batch, llm_choice)\n total_time = time.time() - start_time\n throughput = len(topic_pairs) / total_time\n results[batch_size] = throughput\n print(f\"Batch size: {batch_size}, Throughput: {throughput:.2f} pairs/second\")\n return results\n\ndef plot_results(results):\n \"\"\"Plot the benchmarking results.\"\"\"\n batch_sizes = list(results.keys())\n throughputs = list(results.values())\n plt.figure(figsize=(10, 6))\n plt.plot(batch_sizes, throughputs, marker='o')\n plt.xlabel('Batch Size')\n plt.ylabel('Throughput (pairs/second)')\n plt.title('Throughput vs Batch Size')\n plt.grid(True)\n plt.show()\n\ndef main():\n # Load the model with generate.choice configured\n llm_choice = load_model()\n \n # List of topic pairs to score\n topic_pairs = [\n ('artificial intelligence', 'machine learning'),\n ('quantum physics', 'astronomy'),\n ('poetry', 'literature'),\n ('basketball', 'soccer'),\n ('biology', 'chemistry'),\n ('music', 'painting'),\n ('cooking', 'baking'),\n ('psychology', 'neuroscience')\n ]\n \n # Benchmark performance for different batch sizes\n batch_sizes = [1, 2, 4, 8] #","supportThemeIcons":false,"supportHtml":false}}]}
2025-03-11 18:17:26.327 [trace] [Window] ChatListItemRenderer#getNextProgressiveRenderContent: Want to render 195 at 13.783421606789611 words/s, counting...
2025-03-11 18:17:26.328 [trace] [Window] ChatListItemRenderer#getNextProgressiveRenderContent: Chunk 0: Want to render 195 words and found 195 words. Total words in chunk: 216
2025-03-11 18:17:26.328 [trace] [Window] ChatListItemRenderer#getNextProgressiveRenderContent: Want to render 195 words. Rendering 195 words. Buffer: 21 words
2025-03-11 18:17:26.328 [trace] [Window] ChatListItemRenderer#doNextProgressiveRender: not rendering any new content this tick, but more available
2025-03-11 18:17:26.373 [trace] [Window] ChatListItemRenderer#doNextProgressiveRender: START progressive render, index=1, renderData={"lastRenderTime":1741717046328,"renderedWordCount":195,"renderedParts":[{"kind":"markdownContent","content":{"value":"### [test.py](file:///home/user/dev/testing/test.py)\n\nAdd functionality to compare throughput for different batch sizes and generate graphs.\n\n````python\n<vscode_codeblock_uri>file:///home/user/dev/testing/test.py</vscode_codeblock_uri>import time\nimport warnings\nimport os\nfrom typing import List\nimport matplotlib.pyplot as plt # Add matplotlib for graphing\n\nwarnings.filterwarnings(\"ignore\")\nos.environ['TF_CPP_MIN_LOG_LEVEL'] = '3'\n\nfrom outlines import models, generate, samplers\n\n# ...existing code...\n\ndef benchmark_batch_sizes(topic_pairs: List[tuple], llm_choice, batch_sizes: List[int]):\n \"\"\"Benchmark performance for different batch sizes.\"\"\"\n results = {}\n for batch_size in batch_sizes:\n print(f\"\\nBenchmarking batch size: {batch_size}\")\n start_time = time.time()\n for i in range(0, len(topic_pairs), batch_size):\n batch = topic_pairs[i:i + batch_size]\n batch_score_topics(batch, llm_choice)\n total_time = time.time() - start_time\n throughput = len(topic_pairs) / total_time\n results[batch_size] = throughput\n print(f\"Batch size: {batch_size}, Throughput: {throughput:.2f} pairs/second\")\n return results\n\ndef plot_results(results):\n \"\"\"Plot the benchmarking results.\"\"\"\n batch_sizes = list(results.keys())\n throughputs = list(results.values())\n plt.figure(figsize=(10, 6))\n plt.plot(batch_sizes, throughputs, marker='o')\n plt.xlabel('Batch Size')\n plt.ylabel('Throughput (pairs/second)')\n plt.title('Throughput vs Batch Size')\n plt.grid(True)\n plt.show()\n\ndef main():\n # Load the model with generate.choice configured\n llm_choice = load_model()\n \n # List of topic pairs to score\n topic_pairs = [\n ('artificial intelligence', 'machine learning'),\n ('quantum physics', 'astronomy'),\n ('poetry', 'literature'),\n ('basketball', 'soccer'),\n ('biology', 'chemistry'),\n ('music', 'painting'),\n ('cooking', 'baking'),\n ('psychology', 'neuroscience')\n ]\n \n # Benchmark performance for different batch sizes\n batch_sizes = [1, 2, 4, 8] # Define","supportThemeIcons":false,"supportHtml":false}}]}
2025-03-11 18:17:26.373 [trace] [Window] ChatListItemRenderer#getNextProgressiveRenderContent: Want to render 195 at 13.783421606789611 words/s, counting...
2025-03-11 18:17:26.373 [trace] [Window] ChatListItemRenderer#getNextProgressiveRenderContent: Chunk 0: Want to render 195 words and found 195 words. Total words in chunk: 216
2025-03-11 18:17:26.374 [trace] [Window] ChatListItemRenderer#getNextProgressiveRenderContent: Want to render 195 words. Rendering 195 words. Buffer: 21 words
2025-03-11 18:17:26.374 [trace] [Window] ChatListItemRenderer#doNextProgressiveRender: not rendering any new content this tick, but more available
2025-03-11 18:17:26.423 [trace] [Window] ChatListItemRenderer#doNextProgressiveRender: START progressive render, index=1, renderData={"lastRenderTime":1741717046328,"renderedWordCount":195,"renderedParts":[{"kind":"markdownContent","content":{"value":"### [test.py](file:///home/user/dev/testing/test.py)\n\nAdd functionality to compare throughput for different batch sizes and generate graphs.\n\n````python\n<vscode_codeblock_uri>file:///home/user/dev/testing/test.py</vscode_codeblock_uri>import time\nimport warnings\nimport os\nfrom typing import List\nimport matplotlib.pyplot as plt # Add matplotlib for graphing\n\nwarnings.filterwarnings(\"ignore\")\nos.environ['TF_CPP_MIN_LOG_LEVEL'] = '3'\n\nfrom outlines import models, generate, samplers\n\n# ...existing code...\n\ndef benchmark_batch_sizes(topic_pairs: List[tuple], llm_choice, batch_sizes: List[int]):\n \"\"\"Benchmark performance for different batch sizes.\"\"\"\n results = {}\n for batch_size in batch_sizes:\n print(f\"\\nBenchmarking batch size: {batch_size}\")\n start_time = time.time()\n for i in range(0, len(topic_pairs), batch_size):\n batch = topic_pairs[i:i + batch_size]\n batch_score_topics(batch, llm_choice)\n total_time = time.time() - start_time\n throughput = len(topic_pairs) / total_time\n results[batch_size] = throughput\n print(f\"Batch size: {batch_size}, Throughput: {throughput:.2f} pairs/second\")\n return results\n\ndef plot_results(results):\n \"\"\"Plot the benchmarking results.\"\"\"\n batch_sizes = list(results.keys())\n throughputs = list(results.values())\n plt.figure(figsize=(10, 6))\n plt.plot(batch_sizes, throughputs, marker='o')\n plt.xlabel('Batch Size')\n plt.ylabel('Throughput (pairs/second)')\n plt.title('Throughput vs Batch Size')\n plt.grid(True)\n plt.show()\n\ndef main():\n # Load the model with generate.choice configured\n llm_choice = load_model()\n \n # List of topic pairs to score\n topic_pairs = [\n ('artificial intelligence', 'machine learning'),\n ('quantum physics', 'astronomy'),\n ('poetry', 'literature'),\n ('basketball', 'soccer'),\n ('biology', 'chemistry'),\n ('music', 'painting'),\n ('cooking', 'baking'),\n ('psychology', 'neuroscience')\n ]\n \n # Benchmark performance for different batch sizes\n batch_sizes = [1, 2, 4, 8] # Define","supportThemeIcons":false,"supportHtml":false}}]}
2025-03-11 18:17:26.423 [trace] [Window] ChatListItemRenderer#getNextProgressiveRenderContent: Want to render 196 at 13.783421606789611 words/s, counting...
2025-03-11 18:17:26.424 [trace] [Window] ChatListItemRenderer#getNextProgressiveRenderContent: Chunk 0: Want to render 196 words and found 196 words. Total words in chunk: 216
2025-03-11 18:17:26.425 [trace] [Window] ChatListItemRenderer#getNextProgressiveRenderContent: Want to render 196 words. Rendering 196 words. Buffer: 20 words
2025-03-11 18:17:26.425 [trace] [Window] ChatListItemRenderer#doNextProgressiveRender: not rendering any new content this tick, but more available
2025-03-11 18:17:26.473 [trace] [Window] ChatListItemRenderer#doNextProgressiveRender: START progressive render, index=1, renderData={"lastRenderTime":1741717046423,"renderedWordCount":196,"renderedParts":[{"kind":"markdownContent","content":{"value":"### [test.py](file:///home/user/dev/testing/test.py)\n\nAdd functionality to compare throughput for different batch sizes and generate graphs.\n\n````python\n<vscode_codeblock_uri>file:///home/user/dev/testing/test.py</vscode_codeblock_uri>import time\nimport warnings\nimport os\nfrom typing import List\nimport matplotlib.pyplot as plt # Add matplotlib for graphing\n\nwarnings.filterwarnings(\"ignore\")\nos.environ['TF_CPP_MIN_LOG_LEVEL'] = '3'\n\nfrom outlines import models, generate, samplers\n\n# ...existing code...\n\ndef benchmark_batch_sizes(topic_pairs: List[tuple], llm_choice, batch_sizes: List[int]):\n \"\"\"Benchmark performance for different batch sizes.\"\"\"\n results = {}\n for batch_size in batch_sizes:\n print(f\"\\nBenchmarking batch size: {batch_size}\")\n start_time = time.time()\n for i in range(0, len(topic_pairs), batch_size):\n batch = topic_pairs[i:i + batch_size]\n batch_score_topics(batch, llm_choice)\n total_time = time.time() - start_time\n throughput = len(topic_pairs) / total_time\n results[batch_size] = throughput\n print(f\"Batch size: {batch_size}, Throughput: {throughput:.2f} pairs/second\")\n return results\n\ndef plot_results(results):\n \"\"\"Plot the benchmarking results.\"\"\"\n batch_sizes = list(results.keys())\n throughputs = list(results.values())\n plt.figure(figsize=(10, 6))\n plt.plot(batch_sizes, throughputs, marker='o')\n plt.xlabel('Batch Size')\n plt.ylabel('Throughput (pairs/second)')\n plt.title('Throughput vs Batch Size')\n plt.grid(True)\n plt.show()\n\ndef main():\n # Load the model with generate.choice configured\n llm_choice = load_model()\n \n # List of topic pairs to score\n topic_pairs = [\n ('artificial intelligence', 'machine learning'),\n ('quantum physics', 'astronomy'),\n ('poetry', 'literature'),\n ('basketball', 'soccer'),\n ('biology', 'chemistry'),\n ('music', 'painting'),\n ('cooking', 'baking'),\n ('psychology', 'neuroscience')\n ]\n \n # Benchmark performance for different batch sizes\n batch_sizes = [1, 2, 4, 8] # Define different","supportThemeIcons":false,"supportHtml":false}}]}
2025-03-11 18:17:26.474 [trace] [Window] ChatListItemRenderer#getNextProgressiveRenderContent: Want to render 196 at 13.783421606789611 words/s, counting...
2025-03-11 18:17:26.474 [trace] [Window] ChatListItemRenderer#getNextProgressiveRenderContent: Chunk 0: Want to render 196 words and found 196 words. Total words in chunk: 216
2025-03-11 18:17:26.474 [trace] [Window] ChatListItemRenderer#getNextProgressiveRenderContent: Want to render 196 words. Rendering 196 words. Buffer: 20 words
2025-03-11 18:17:26.474 [trace] [Window] ChatListItemRenderer#doNextProgressiveRender: not rendering any new content this tick, but more available
2025-03-11 18:17:26.523 [trace] [Window] ChatListItemRenderer#doNextProgressiveRender: START progressive render, index=1, renderData={"lastRenderTime":1741717046423,"renderedWordCount":196,"renderedParts":[{"kind":"markdownContent","content":{"value":"### [test.py](file:///home/user/dev/testing/test.py)\n\nAdd functionality to compare throughput for different batch sizes and generate graphs.\n\n````python\n<vscode_codeblock_uri>file:///home/user/dev/testing/test.py</vscode_codeblock_uri>import time\nimport warnings\nimport os\nfrom typing import List\nimport matplotlib.pyplot as plt # Add matplotlib for graphing\n\nwarnings.filterwarnings(\"ignore\")\nos.environ['TF_CPP_MIN_LOG_LEVEL'] = '3'\n\nfrom outlines import models, generate, samplers\n\n# ...existing code...\n\ndef benchmark_batch_sizes(topic_pairs: List[tuple], llm_choice, batch_sizes: List[int]):\n \"\"\"Benchmark performance for different batch sizes.\"\"\"\n results = {}\n for batch_size in batch_sizes:\n print(f\"\\nBenchmarking batch size: {batch_size}\")\n start_time = time.time()\n for i in range(0, len(topic_pairs), batch_size):\n batch = topic_pairs[i:i + batch_size]\n batch_score_topics(batch, llm_choice)\n total_time = time.time() - start_time\n throughput = len(topic_pairs) / total_time\n results[batch_size] = throughput\n print(f\"Batch size: {batch_size}, Throughput: {throughput:.2f} pairs/second\")\n return results\n\ndef plot_results(results):\n \"\"\"Plot the benchmarking results.\"\"\"\n batch_sizes = list(results.keys())\n throughputs = list(results.values())\n plt.figure(figsize=(10, 6))\n plt.plot(batch_sizes, throughputs, marker='o')\n plt.xlabel('Batch Size')\n plt.ylabel('Throughput (pairs/second)')\n plt.title('Throughput vs Batch Size')\n plt.grid(True)\n plt.show()\n\ndef main():\n # Load the model with generate.choice configured\n llm_choice = load_model()\n \n # List of topic pairs to score\n topic_pairs = [\n ('artificial intelligence', 'machine learning'),\n ('quantum physics', 'astronomy'),\n ('poetry', 'literature'),\n ('basketball', 'soccer'),\n ('biology', 'chemistry'),\n ('music', 'painting'),\n ('cooking', 'baking'),\n ('psychology', 'neuroscience')\n ]\n \n # Benchmark performance for different batch sizes\n batch_sizes = [1, 2, 4, 8] # Define different","supportThemeIcons":false,"supportHtml":false}}]}
2025-03-11 18:17:26.524 [trace] [Window] ChatListItemRenderer#getNextProgressiveRenderContent: Want to render 197 at 13.783421606789611 words/s, counting...
2025-03-11 18:17:26.524 [trace] [Window] ChatListItemRenderer#getNextProgressiveRenderContent: Chunk 0: Want to render 197 words and found 197 words. Total words in chunk: 216
2025-03-11 18:17:26.524 [trace] [Window] ChatListItemRenderer#getNextProgressiveRenderContent: Want to render 197 words. Rendering 197 words. Buffer: 19 words
2025-03-11 18:17:26.524 [trace] [Window] ChatListItemRenderer#doNextProgressiveRender: not rendering any new content this tick, but more available
2025-03-11 18:17:26.573 [trace] [Window] ChatListItemRenderer#doNextProgressiveRender: START progressive render, index=1, renderData={"lastRenderTime":1741717046523,"renderedWordCount":197,"renderedParts":[{"kind":"markdownContent","content":{"value":"### [test.py](file:///home/user/dev/testing/test.py)\n\nAdd functionality to compare throughput for different batch sizes and generate graphs.\n\n````python\n<vscode_codeblock_uri>file:///home/user/dev/testing/test.py</vscode_codeblock_uri>import time\nimport warnings\nimport os\nfrom typing import List\nimport matplotlib.pyplot as plt # Add matplotlib for graphing\n\nwarnings.filterwarnings(\"ignore\")\nos.environ['TF_CPP_MIN_LOG_LEVEL'] = '3'\n\nfrom outlines import models, generate, samplers\n\n# ...existing code...\n\ndef benchmark_batch_sizes(topic_pairs: List[tuple], llm_choice, batch_sizes: List[int]):\n \"\"\"Benchmark performance for different batch sizes.\"\"\"\n results = {}\n for batch_size in batch_sizes:\n print(f\"\\nBenchmarking batch size: {batch_size}\")\n start_time = time.time()\n for i in range(0, len(topic_pairs), batch_size):\n batch = topic_pairs[i:i + batch_size]\n batch_score_topics(batch, llm_choice)\n total_time = time.time() - start_time\n throughput = len(topic_pairs) / total_time\n results[batch_size] = throughput\n print(f\"Batch size: {batch_size}, Throughput: {throughput:.2f} pairs/second\")\n return results\n\ndef plot_results(results):\n \"\"\"Plot the benchmarking results.\"\"\"\n batch_sizes = list(results.keys())\n throughputs = list(results.values())\n plt.figure(figsize=(10, 6))\n plt.plot(batch_sizes, throughputs, marker='o')\n plt.xlabel('Batch Size')\n plt.ylabel('Throughput (pairs/second)')\n plt.title('Throughput vs Batch Size')\n plt.grid(True)\n plt.show()\n\ndef main():\n # Load the model with generate.choice configured\n llm_choice = load_model()\n \n # List of topic pairs to score\n topic_pairs = [\n ('artificial intelligence', 'machine learning'),\n ('quantum physics', 'astronomy'),\n ('poetry', 'literature'),\n ('basketball', 'soccer'),\n ('biology', 'chemistry'),\n ('music', 'painting'),\n ('cooking', 'baking'),\n ('psychology', 'neuroscience')\n ]\n \n # Benchmark performance for different batch sizes\n batch_sizes = [1, 2, 4, 8] # Define different batch","supportThemeIcons":false,"supportHtml":false}}]}
2025-03-11 18:17:26.574 [trace] [Window] ChatListItemRenderer#getNextProgressiveRenderContent: Want to render 197 at 13.783421606789611 words/s, counting...
2025-03-11 18:17:26.574 [trace] [Window] ChatListItemRenderer#getNextProgressiveRenderContent: Chunk 0: Want to render 197 words and found 197 words. Total words in chunk: 216
2025-03-11 18:17:26.575 [trace] [Window] ChatListItemRenderer#getNextProgressiveRenderContent: Want to render 197 words. Rendering 197 words. Buffer: 19 words
2025-03-11 18:17:26.575 [trace] [Window] ChatListItemRenderer#doNextProgressiveRender: not rendering any new content this tick, but more available
2025-03-11 18:17:26.623 [trace] [Window] ChatListItemRenderer#doNextProgressiveRender: START progressive render, index=1, renderData={"lastRenderTime":1741717046523,"renderedWordCount":197,"renderedParts":[{"kind":"markdownContent","content":{"value":"### [test.py](file:///home/user/dev/testing/test.py)\n\nAdd functionality to compare throughput for different batch sizes and generate graphs.\n\n````python\n<vscode_codeblock_uri>file:///home/user/dev/testing/test.py</vscode_codeblock_uri>import time\nimport warnings\nimport os\nfrom typing import List\nimport matplotlib.pyplot as plt # Add matplotlib for graphing\n\nwarnings.filterwarnings(\"ignore\")\nos.environ['TF_CPP_MIN_LOG_LEVEL'] = '3'\n\nfrom outlines import models, generate, samplers\n\n# ...existing code...\n\ndef benchmark_batch_sizes(topic_pairs: List[tuple], llm_choice, batch_sizes: List[int]):\n \"\"\"Benchmark performance for different batch sizes.\"\"\"\n results = {}\n for batch_size in batch_sizes:\n print(f\"\\nBenchmarking batch size: {batch_size}\")\n start_time = time.time()\n for i in range(0, len(topic_pairs), batch_size):\n batch = topic_pairs[i:i + batch_size]\n batch_score_topics(batch, llm_choice)\n total_time = time.time() - start_time\n throughput = len(topic_pairs) / total_time\n results[batch_size] = throughput\n print(f\"Batch size: {batch_size}, Throughput: {throughput:.2f} pairs/second\")\n return results\n\ndef plot_results(results):\n \"\"\"Plot the benchmarking results.\"\"\"\n batch_sizes = list(results.keys())\n throughputs = list(results.values())\n plt.figure(figsize=(10, 6))\n plt.plot(batch_sizes, throughputs, marker='o')\n plt.xlabel('Batch Size')\n plt.ylabel('Throughput (pairs/second)')\n plt.title('Throughput vs Batch Size')\n plt.grid(True)\n plt.show()\n\ndef main():\n # Load the model with generate.choice configured\n llm_choice = load_model()\n \n # List of topic pairs to score\n topic_pairs = [\n ('artificial intelligence', 'machine learning'),\n ('quantum physics', 'astronomy'),\n ('poetry', 'literature'),\n ('basketball', 'soccer'),\n ('biology', 'chemistry'),\n ('music', 'painting'),\n ('cooking', 'baking'),\n ('psychology', 'neuroscience')\n ]\n \n # Benchmark performance for different batch sizes\n batch_sizes = [1, 2, 4, 8] # Define different batch","supportThemeIcons":false,"supportHtml":false}}]}
2025-03-11 18:17:26.623 [trace] [Window] ChatListItemRenderer#getNextProgressiveRenderContent: Want to render 198 at 13.783421606789611 words/s, counting...
2025-03-11 18:17:26.623 [trace] [Window] ChatListItemRenderer#getNextProgressiveRenderContent: Chunk 0: Want to render 198 words and found 198 words. Total words in chunk: 216
2025-03-11 18:17:26.623 [trace] [Window] ChatListItemRenderer#getNextProgressiveRenderContent: Want to render 198 words. Rendering 198 words. Buffer: 18 words
2025-03-11 18:17:26.624 [trace] [Window] ChatListItemRenderer#doNextProgressiveRender: not rendering any new content this tick, but more available
2025-03-11 18:17:26.673 [trace] [Window] ChatListItemRenderer#doNextProgressiveRender: START progressive render, index=1, renderData={"lastRenderTime":1741717046623,"renderedWordCount":198,"renderedParts":[{"kind":"markdownContent","content":{"value":"### [test.py](file:///home/user/dev/testing/test.py)\n\nAdd functionality to compare throughput for different batch sizes and generate graphs.\n\n````python\n<vscode_codeblock_uri>file:///home/user/dev/testing/test.py</vscode_codeblock_uri>import time\nimport warnings\nimport os\nfrom typing import List\nimport matplotlib.pyplot as plt # Add matplotlib for graphing\n\nwarnings.filterwarnings(\"ignore\")\nos.environ['TF_CPP_MIN_LOG_LEVEL'] = '3'\n\nfrom outlines import models, generate, samplers\n\n# ...existing code...\n\ndef benchmark_batch_sizes(topic_pairs: List[tuple], llm_choice, batch_sizes: List[int]):\n \"\"\"Benchmark performance for different batch sizes.\"\"\"\n results = {}\n for batch_size in batch_sizes:\n print(f\"\\nBenchmarking batch size: {batch_size}\")\n start_time = time.time()\n for i in range(0, len(topic_pairs), batch_size):\n batch = topic_pairs[i:i + batch_size]\n batch_score_topics(batch, llm_choice)\n total_time = time.time() - start_time\n throughput = len(topic_pairs) / total_time\n results[batch_size] = throughput\n print(f\"Batch size: {batch_size}, Throughput: {throughput:.2f} pairs/second\")\n return results\n\ndef plot_results(results):\n \"\"\"Plot the benchmarking results.\"\"\"\n batch_sizes = list(results.keys())\n throughputs = list(results.values())\n plt.figure(figsize=(10, 6))\n plt.plot(batch_sizes, throughputs, marker='o')\n plt.xlabel('Batch Size')\n plt.ylabel('Throughput (pairs/second)')\n plt.title('Throughput vs Batch Size')\n plt.grid(True)\n plt.show()\n\ndef main():\n # Load the model with generate.choice configured\n llm_choice = load_model()\n \n # List of topic pairs to score\n topic_pairs = [\n ('artificial intelligence', 'machine learning'),\n ('quantum physics', 'astronomy'),\n ('poetry', 'literature'),\n ('basketball', 'soccer'),\n ('biology', 'chemistry'),\n ('music', 'painting'),\n ('cooking', 'baking'),\n ('psychology', 'neuroscience')\n ]\n \n # Benchmark performance for different batch sizes\n batch_sizes = [1, 2, 4, 8] # Define different batch sizes","supportThemeIcons":false,"supportHtml":false}}]}
2025-03-11 18:17:26.673 [trace] [Window] ChatListItemRenderer#getNextProgressiveRenderContent: Want to render 198 at 13.783421606789611 words/s, counting...
2025-03-11 18:17:26.673 [trace] [Window] ChatListItemRenderer#getNextProgressiveRenderContent: Chunk 0: Want to render 198 words and found 198 words. Total words in chunk: 216
2025-03-11 18:17:26.673 [trace] [Window] ChatListItemRenderer#getNextProgressiveRenderContent: Want to render 198 words. Rendering 198 words. Buffer: 18 words
2025-03-11 18:17:26.674 [trace] [Window] ChatListItemRenderer#doNextProgressiveRender: not rendering any new content this tick, but more available
2025-03-11 18:17:26.723 [trace] [Window] ChatListItemRenderer#doNextProgressiveRender: START progressive render, index=1, renderData={"lastRenderTime":1741717046623,"renderedWordCount":198,"renderedParts":[{"kind":"markdownContent","content":{"value":"### [test.py](file:///home/user/dev/testing/test.py)\n\nAdd functionality to compare throughput for different batch sizes and generate graphs.\n\n````python\n<vscode_codeblock_uri>file:///home/user/dev/testing/test.py</vscode_codeblock_uri>import time\nimport warnings\nimport os\nfrom typing import List\nimport matplotlib.pyplot as plt # Add matplotlib for graphing\n\nwarnings.filterwarnings(\"ignore\")\nos.environ['TF_CPP_MIN_LOG_LEVEL'] = '3'\n\nfrom outlines import models, generate, samplers\n\n# ...existing code...\n\ndef benchmark_batch_sizes(topic_pairs: List[tuple], llm_choice, batch_sizes: List[int]):\n \"\"\"Benchmark performance for different batch sizes.\"\"\"\n results = {}\n for batch_size in batch_sizes:\n print(f\"\\nBenchmarking batch size: {batch_size}\")\n start_time = time.time()\n for i in range(0, len(topic_pairs), batch_size):\n batch = topic_pairs[i:i + batch_size]\n batch_score_topics(batch, llm_choice)\n total_time = time.time() - start_time\n throughput = len(topic_pairs) / total_time\n results[batch_size] = throughput\n print(f\"Batch size: {batch_size}, Throughput: {throughput:.2f} pairs/second\")\n return results\n\ndef plot_results(results):\n \"\"\"Plot the benchmarking results.\"\"\"\n batch_sizes = list(results.keys())\n throughputs = list(results.values())\n plt.figure(figsize=(10, 6))\n plt.plot(batch_sizes, throughputs, marker='o')\n plt.xlabel('Batch Size')\n plt.ylabel('Throughput (pairs/second)')\n plt.title('Throughput vs Batch Size')\n plt.grid(True)\n plt.show()\n\ndef main():\n # Load the model with generate.choice configured\n llm_choice = load_model()\n \n # List of topic pairs to score\n topic_pairs = [\n ('artificial intelligence', 'machine learning'),\n ('quantum physics', 'astronomy'),\n ('poetry', 'literature'),\n ('basketball', 'soccer'),\n ('biology', 'chemistry'),\n ('music', 'painting'),\n ('cooking', 'baking'),\n ('psychology', 'neuroscience')\n ]\n \n # Benchmark performance for different batch sizes\n batch_sizes = [1, 2, 4, 8] # Define different batch sizes","supportThemeIcons":false,"supportHtml":false}}]}
2025-03-11 18:17:26.723 [trace] [Window] ChatListItemRenderer#getNextProgressiveRenderContent: Want to render 199 at 13.783421606789611 words/s, counting...
2025-03-11 18:17:26.724 [trace] [Window] ChatListItemRenderer#getNextProgressiveRenderContent: Chunk 0: Want to render 199 words and found 199 words. Total words in chunk: 216
2025-03-11 18:17:26.724 [trace] [Window] ChatListItemRenderer#getNextProgressiveRenderContent: Want to render 199 words. Rendering 199 words. Buffer: 17 words
2025-03-11 18:17:26.724 [trace] [Window] ChatListItemRenderer#doNextProgressiveRender: not rendering any new content this tick, but more available
2025-03-11 18:17:26.773 [trace] [Window] ChatListItemRenderer#doNextProgressiveRender: START progressive render, index=1, renderData={"lastRenderTime":1741717046724,"renderedWordCount":199,"renderedParts":[{"kind":"markdownContent","content":{"value":"### [test.py](file:///home/user/dev/testing/test.py)\n\nAdd functionality to compare throughput for different batch sizes and generate graphs.\n\n````python\n<vscode_codeblock_uri>file:///home/user/dev/testing/test.py</vscode_codeblock_uri>import time\nimport warnings\nimport os\nfrom typing import List\nimport matplotlib.pyplot as plt # Add matplotlib for graphing\n\nwarnings.filterwarnings(\"ignore\")\nos.environ['TF_CPP_MIN_LOG_LEVEL'] = '3'\n\nfrom outlines import models, generate, samplers\n\n# ...existing code...\n\ndef benchmark_batch_sizes(topic_pairs: List[tuple], llm_choice, batch_sizes: List[int]):\n \"\"\"Benchmark performance for different batch sizes.\"\"\"\n results = {}\n for batch_size in batch_sizes:\n print(f\"\\nBenchmarking batch size: {batch_size}\")\n start_time = time.time()\n for i in range(0, len(topic_pairs), batch_size):\n batch = topic_pairs[i:i + batch_size]\n batch_score_topics(batch, llm_choice)\n total_time = time.time() - start_time\n throughput = len(topic_pairs) / total_time\n results[batch_size] = throughput\n print(f\"Batch size: {batch_size}, Throughput: {throughput:.2f} pairs/second\")\n return results\n\ndef plot_results(results):\n \"\"\"Plot the benchmarking results.\"\"\"\n batch_sizes = list(results.keys())\n throughputs = list(results.values())\n plt.figure(figsize=(10, 6))\n plt.plot(batch_sizes, throughputs, marker='o')\n plt.xlabel('Batch Size')\n plt.ylabel('Throughput (pairs/second)')\n plt.title('Throughput vs Batch Size')\n plt.grid(True)\n plt.show()\n\ndef main():\n # Load the model with generate.choice configured\n llm_choice = load_model()\n \n # List of topic pairs to score\n topic_pairs = [\n ('artificial intelligence', 'machine learning'),\n ('quantum physics', 'astronomy'),\n ('poetry', 'literature'),\n ('basketball', 'soccer'),\n ('biology', 'chemistry'),\n ('music', 'painting'),\n ('cooking', 'baking'),\n ('psychology', 'neuroscience')\n ]\n \n # Benchmark performance for different batch sizes\n batch_sizes = [1, 2, 4, 8] # Define different batch sizes to","supportThemeIcons":false,"supportHtml":false}}]}
2025-03-11 18:17:26.774 [trace] [Window] ChatListItemRenderer#getNextProgressiveRenderContent: Want to render 199 at 13.783421606789611 words/s, counting...
2025-03-11 18:17:26.774 [trace] [Window] ChatListItemRenderer#getNextProgressiveRenderContent: Chunk 0: Want to render 199 words and found 199 words. Total words in chunk: 216
2025-03-11 18:17:26.774 [trace] [Window] ChatListItemRenderer#getNextProgressiveRenderContent: Want to render 199 words. Rendering 199 words. Buffer: 17 words
2025-03-11 18:17:26.775 [trace] [Window] ChatListItemRenderer#doNextProgressiveRender: not rendering any new content this tick, but more available
2025-03-11 18:17:26.823 [trace] [Window] ChatListItemRenderer#doNextProgressiveRender: START progressive render, index=1, renderData={"lastRenderTime":1741717046724,"renderedWordCount":199,"renderedParts":[{"kind":"markdownContent","content":{"value":"### [test.py](file:///home/user/dev/testing/test.py)\n\nAdd functionality to compare throughput for different batch sizes and generate graphs.\n\n````python\n<vscode_codeblock_uri>file:///home/user/dev/testing/test.py</vscode_codeblock_uri>import time\nimport warnings\nimport os\nfrom typing import List\nimport matplotlib.pyplot as plt # Add matplotlib for graphing\n\nwarnings.filterwarnings(\"ignore\")\nos.environ['TF_CPP_MIN_LOG_LEVEL'] = '3'\n\nfrom outlines import models, generate, samplers\n\n# ...existing code...\n\ndef benchmark_batch_sizes(topic_pairs: List[tuple], llm_choice, batch_sizes: List[int]):\n \"\"\"Benchmark performance for different batch sizes.\"\"\"\n results = {}\n for batch_size in batch_sizes:\n print(f\"\\nBenchmarking batch size: {batch_size}\")\n start_time = time.time()\n for i in range(0, len(topic_pairs), batch_size):\n batch = topic_pairs[i:i + batch_size]\n batch_score_topics(batch, llm_choice)\n total_time = time.time() - start_time\n throughput = len(topic_pairs) / total_time\n results[batch_size] = throughput\n print(f\"Batch size: {batch_size}, Throughput: {throughput:.2f} pairs/second\")\n return results\n\ndef plot_results(results):\n \"\"\"Plot the benchmarking results.\"\"\"\n batch_sizes = list(results.keys())\n throughputs = list(results.values())\n plt.figure(figsize=(10, 6))\n plt.plot(batch_sizes, throughputs, marker='o')\n plt.xlabel('Batch Size')\n plt.ylabel('Throughput (pairs/second)')\n plt.title('Throughput vs Batch Size')\n plt.grid(True)\n plt.show()\n\ndef main():\n # Load the model with generate.choice configured\n llm_choice = load_model()\n \n # List of topic pairs to score\n topic_pairs = [\n ('artificial intelligence', 'machine learning'),\n ('quantum physics', 'astronomy'),\n ('poetry', 'literature'),\n ('basketball', 'soccer'),\n ('biology', 'chemistry'),\n ('music', 'painting'),\n ('cooking', 'baking'),\n ('psychology', 'neuroscience')\n ]\n \n # Benchmark performance for different batch sizes\n batch_sizes = [1, 2, 4, 8] # Define different batch sizes to","supportThemeIcons":false,"supportHtml":false}}]}
2025-03-11 18:17:26.824 [trace] [Window] ChatListItemRenderer#getNextProgressiveRenderContent: Want to render 200 at 13.783421606789611 words/s, counting...
2025-03-11 18:17:26.824 [trace] [Window] ChatListItemRenderer#getNextProgressiveRenderContent: Chunk 0: Want to render 200 words and found 200 words. Total words in chunk: 216
2025-03-11 18:17:26.824 [trace] [Window] ChatListItemRenderer#getNextProgressiveRenderContent: Want to render 200 words. Rendering 200 words. Buffer: 16 words
2025-03-11 18:17:26.825 [trace] [Window] ChatListItemRenderer#doNextProgressiveRender: not rendering any new content this tick, but more available
2025-03-11 18:17:26.873 [trace] [Window] ChatListItemRenderer#doNextProgressiveRender: START progressive render, index=1, renderData={"lastRenderTime":1741717046823,"renderedWordCount":200,"renderedParts":[{"kind":"markdownContent","content":{"value":"### [test.py](file:///home/user/dev/testing/test.py)\n\nAdd functionality to compare throughput for different batch sizes and generate graphs.\n\n````python\n<vscode_codeblock_uri>file:///home/user/dev/testing/test.py</vscode_codeblock_uri>import time\nimport warnings\nimport os\nfrom typing import List\nimport matplotlib.pyplot as plt # Add matplotlib for graphing\n\nwarnings.filterwarnings(\"ignore\")\nos.environ['TF_CPP_MIN_LOG_LEVEL'] = '3'\n\nfrom outlines import models, generate, samplers\n\n# ...existing code...\n\ndef benchmark_batch_sizes(topic_pairs: List[tuple], llm_choice, batch_sizes: List[int]):\n \"\"\"Benchmark performance for different batch sizes.\"\"\"\n results = {}\n for batch_size in batch_sizes:\n print(f\"\\nBenchmarking batch size: {batch_size}\")\n start_time = time.time()\n for i in range(0, len(topic_pairs), batch_size):\n batch = topic_pairs[i:i + batch_size]\n batch_score_topics(batch, llm_choice)\n total_time = time.time() - start_time\n throughput = len(topic_pairs) / total_time\n results[batch_size] = throughput\n print(f\"Batch size: {batch_size}, Throughput: {throughput:.2f} pairs/second\")\n return results\n\ndef plot_results(results):\n \"\"\"Plot the benchmarking results.\"\"\"\n batch_sizes = list(results.keys())\n throughputs = list(results.values())\n plt.figure(figsize=(10, 6))\n plt.plot(batch_sizes, throughputs, marker='o')\n plt.xlabel('Batch Size')\n plt.ylabel('Throughput (pairs/second)')\n plt.title('Throughput vs Batch Size')\n plt.grid(True)\n plt.show()\n\ndef main():\n # Load the model with generate.choice configured\n llm_choice = load_model()\n \n # List of topic pairs to score\n topic_pairs = [\n ('artificial intelligence', 'machine learning'),\n ('quantum physics', 'astronomy'),\n ('poetry', 'literature'),\n ('basketball', 'soccer'),\n ('biology', 'chemistry'),\n ('music', 'painting'),\n ('cooking', 'baking'),\n ('psychology', 'neuroscience')\n ]\n \n # Benchmark performance for different batch sizes\n batch_sizes = [1, 2, 4, 8] # Define different batch sizes to test","supportThemeIcons":false,"supportHtml":false}}]}
2025-03-11 18:17:26.874 [trace] [Window] ChatListItemRenderer#getNextProgressiveRenderContent: Want to render 200 at 13.783421606789611 words/s, counting...
2025-03-11 18:17:26.874 [trace] [Window] ChatListItemRenderer#getNextProgressiveRenderContent: Chunk 0: Want to render 200 words and found 200 words. Total words in chunk: 216
2025-03-11 18:17:26.874 [trace] [Window] ChatListItemRenderer#getNextProgressiveRenderContent: Want to render 200 words. Rendering 200 words. Buffer: 16 words
2025-03-11 18:17:26.875 [trace] [Window] ChatListItemRenderer#doNextProgressiveRender: not rendering any new content this tick, but more available
2025-03-11 18:17:26.925 [trace] [Window] ChatListItemRenderer#doNextProgressiveRender: START progressive render, index=1, renderData={"lastRenderTime":1741717046823,"renderedWordCount":200,"renderedParts":[{"kind":"markdownContent","content":{"value":"### [test.py](file:///home/user/dev/testing/test.py)\n\nAdd functionality to compare throughput for different batch sizes and generate graphs.\n\n````python\n<vscode_codeblock_uri>file:///home/user/dev/testing/test.py</vscode_codeblock_uri>import time\nimport warnings\nimport os\nfrom typing import List\nimport matplotlib.pyplot as plt # Add matplotlib for graphing\n\nwarnings.filterwarnings(\"ignore\")\nos.environ['TF_CPP_MIN_LOG_LEVEL'] = '3'\n\nfrom outlines import models, generate, samplers\n\n# ...existing code...\n\ndef benchmark_batch_sizes(topic_pairs: List[tuple], llm_choice, batch_sizes: List[int]):\n \"\"\"Benchmark performance for different batch sizes.\"\"\"\n results = {}\n for batch_size in batch_sizes:\n print(f\"\\nBenchmarking batch size: {batch_size}\")\n start_time = time.time()\n for i in range(0, len(topic_pairs), batch_size):\n batch = topic_pairs[i:i + batch_size]\n batch_score_topics(batch, llm_choice)\n total_time = time.time() - start_time\n throughput = len(topic_pairs) / total_time\n results[batch_size] = throughput\n print(f\"Batch size: {batch_size}, Throughput: {throughput:.2f} pairs/second\")\n return results\n\ndef plot_results(results):\n \"\"\"Plot the benchmarking results.\"\"\"\n batch_sizes = list(results.keys())\n throughputs = list(results.values())\n plt.figure(figsize=(10, 6))\n plt.plot(batch_sizes, throughputs, marker='o')\n plt.xlabel('Batch Size')\n plt.ylabel('Throughput (pairs/second)')\n plt.title('Throughput vs Batch Size')\n plt.grid(True)\n plt.show()\n\ndef main():\n # Load the model with generate.choice configured\n llm_choice = load_model()\n \n # List of topic pairs to score\n topic_pairs = [\n ('artificial intelligence', 'machine learning'),\n ('quantum physics', 'astronomy'),\n ('poetry', 'literature'),\n ('basketball', 'soccer'),\n ('biology', 'chemistry'),\n ('music', 'painting'),\n ('cooking', 'baking'),\n ('psychology', 'neuroscience')\n ]\n \n # Benchmark performance for different batch sizes\n batch_sizes = [1, 2, 4, 8] # Define different batch sizes to test","supportThemeIcons":false,"supportHtml":false}}]}
2025-03-11 18:17:26.926 [trace] [Window] ChatListItemRenderer#getNextProgressiveRenderContent: Want to render 201 at 13.783421606789611 words/s, counting...
2025-03-11 18:17:26.926 [trace] [Window] ChatListItemRenderer#getNextProgressiveRenderContent: Chunk 0: Want to render 201 words and found 201 words. Total words in chunk: 216
2025-03-11 18:17:26.926 [trace] [Window] ChatListItemRenderer#getNextProgressiveRenderContent: Want to render 201 words. Rendering 201 words. Buffer: 15 words
2025-03-11 18:17:26.926 [trace] [Window] ChatListItemRenderer#doNextProgressiveRender: not rendering any new content this tick, but more available
2025-03-11 18:17:26.973 [trace] [Window] ChatListItemRenderer#doNextProgressiveRender: START progressive render, index=1, renderData={"lastRenderTime":1741717046926,"renderedWordCount":201,"renderedParts":[{"kind":"markdownContent","content":{"value":"### [test.py](file:///home/user/dev/testing/test.py)\n\nAdd functionality to compare throughput for different batch sizes and generate graphs.\n\n````python\n<vscode_codeblock_uri>file:///home/user/dev/testing/test.py</vscode_codeblock_uri>import time\nimport warnings\nimport os\nfrom typing import List\nimport matplotlib.pyplot as plt # Add matplotlib for graphing\n\nwarnings.filterwarnings(\"ignore\")\nos.environ['TF_CPP_MIN_LOG_LEVEL'] = '3'\n\nfrom outlines import models, generate, samplers\n\n# ...existing code...\n\ndef benchmark_batch_sizes(topic_pairs: List[tuple], llm_choice, batch_sizes: List[int]):\n \"\"\"Benchmark performance for different batch sizes.\"\"\"\n results = {}\n for batch_size in batch_sizes:\n print(f\"\\nBenchmarking batch size: {batch_size}\")\n start_time = time.time()\n for i in range(0, len(topic_pairs), batch_size):\n batch = topic_pairs[i:i + batch_size]\n batch_score_topics(batch, llm_choice)\n total_time = time.time() - start_time\n throughput = len(topic_pairs) / total_time\n results[batch_size] = throughput\n print(f\"Batch size: {batch_size}, Throughput: {throughput:.2f} pairs/second\")\n return results\n\ndef plot_results(results):\n \"\"\"Plot the benchmarking results.\"\"\"\n batch_sizes = list(results.keys())\n throughputs = list(results.values())\n plt.figure(figsize=(10, 6))\n plt.plot(batch_sizes, throughputs, marker='o')\n plt.xlabel('Batch Size')\n plt.ylabel('Throughput (pairs/second)')\n plt.title('Throughput vs Batch Size')\n plt.grid(True)\n plt.show()\n\ndef main():\n # Load the model with generate.choice configured\n llm_choice = load_model()\n \n # List of topic pairs to score\n topic_pairs = [\n ('artificial intelligence', 'machine learning'),\n ('quantum physics', 'astronomy'),\n ('poetry', 'literature'),\n ('basketball', 'soccer'),\n ('biology', 'chemistry'),\n ('music', 'painting'),\n ('cooking', 'baking'),\n ('psychology', 'neuroscience')\n ]\n \n # Benchmark performance for different batch sizes\n batch_sizes = [1, 2, 4, 8] # Define different batch sizes to test\n results","supportThemeIcons":false,"supportHtml":false}}]}
2025-03-11 18:17:26.974 [trace] [Window] ChatListItemRenderer#getNextProgressiveRenderContent: Want to render 201 at 13.783421606789611 words/s, counting...
2025-03-11 18:17:26.974 [trace] [Window] ChatListItemRenderer#getNextProgressiveRenderContent: Chunk 0: Want to render 201 words and found 201 words. Total words in chunk: 216
2025-03-11 18:17:26.974 [trace] [Window] ChatListItemRenderer#getNextProgressiveRenderContent: Want to render 201 words. Rendering 201 words. Buffer: 15 words
2025-03-11 18:17:26.974 [trace] [Window] ChatListItemRenderer#doNextProgressiveRender: not rendering any new content this tick, but more available
2025-03-11 18:17:27.023 [trace] [Window] ChatListItemRenderer#doNextProgressiveRender: START progressive render, index=1, renderData={"lastRenderTime":1741717046926,"renderedWordCount":201,"renderedParts":[{"kind":"markdownContent","content":{"value":"### [test.py](file:///home/user/dev/testing/test.py)\n\nAdd functionality to compare throughput for different batch sizes and generate graphs.\n\n````python\n<vscode_codeblock_uri>file:///home/user/dev/testing/test.py</vscode_codeblock_uri>import time\nimport warnings\nimport os\nfrom typing import List\nimport matplotlib.pyplot as plt # Add matplotlib for graphing\n\nwarnings.filterwarnings(\"ignore\")\nos.environ['TF_CPP_MIN_LOG_LEVEL'] = '3'\n\nfrom outlines import models, generate, samplers\n\n# ...existing code...\n\ndef benchmark_batch_sizes(topic_pairs: List[tuple], llm_choice, batch_sizes: List[int]):\n \"\"\"Benchmark performance for different batch sizes.\"\"\"\n results = {}\n for batch_size in batch_sizes:\n print(f\"\\nBenchmarking batch size: {batch_size}\")\n start_time = time.time()\n for i in range(0, len(topic_pairs), batch_size):\n batch = topic_pairs[i:i + batch_size]\n batch_score_topics(batch, llm_choice)\n total_time = time.time() - start_time\n throughput = len(topic_pairs) / total_time\n results[batch_size] = throughput\n print(f\"Batch size: {batch_size}, Throughput: {throughput:.2f} pairs/second\")\n return results\n\ndef plot_results(results):\n \"\"\"Plot the benchmarking results.\"\"\"\n batch_sizes = list(results.keys())\n throughputs = list(results.values())\n plt.figure(figsize=(10, 6))\n plt.plot(batch_sizes, throughputs, marker='o')\n plt.xlabel('Batch Size')\n plt.ylabel('Throughput (pairs/second)')\n plt.title('Throughput vs Batch Size')\n plt.grid(True)\n plt.show()\n\ndef main():\n # Load the model with generate.choice configured\n llm_choice = load_model()\n \n # List of topic pairs to score\n topic_pairs = [\n ('artificial intelligence', 'machine learning'),\n ('quantum physics', 'astronomy'),\n ('poetry', 'literature'),\n ('basketball', 'soccer'),\n ('biology', 'chemistry'),\n ('music', 'painting'),\n ('cooking', 'baking'),\n ('psychology', 'neuroscience')\n ]\n \n # Benchmark performance for different batch sizes\n batch_sizes = [1, 2, 4, 8] # Define different batch sizes to test\n results","supportThemeIcons":false,"supportHtml":false}}]}
2025-03-11 18:17:27.024 [trace] [Window] ChatListItemRenderer#getNextProgressiveRenderContent: Want to render 202 at 13.783421606789611 words/s, counting...
2025-03-11 18:17:27.024 [trace] [Window] ChatListItemRenderer#getNextProgressiveRenderContent: Chunk 0: Want to render 202 words and found 202 words. Total words in chunk: 216
2025-03-11 18:17:27.024 [trace] [Window] ChatListItemRenderer#getNextProgressiveRenderContent: Want to render 202 words. Rendering 202 words. Buffer: 14 words
2025-03-11 18:17:27.025 [trace] [Window] ChatListItemRenderer#doNextProgressiveRender: not rendering any new content this tick, but more available
2025-03-11 18:17:27.073 [trace] [Window] ChatListItemRenderer#doNextProgressiveRender: START progressive render, index=1, renderData={"lastRenderTime":1741717047023,"renderedWordCount":202,"renderedParts":[{"kind":"markdownContent","content":{"value":"### [test.py](file:///home/user/dev/testing/test.py)\n\nAdd functionality to compare throughput for different batch sizes and generate graphs.\n\n````python\n<vscode_codeblock_uri>file:///home/user/dev/testing/test.py</vscode_codeblock_uri>import time\nimport warnings\nimport os\nfrom typing import List\nimport matplotlib.pyplot as plt # Add matplotlib for graphing\n\nwarnings.filterwarnings(\"ignore\")\nos.environ['TF_CPP_MIN_LOG_LEVEL'] = '3'\n\nfrom outlines import models, generate, samplers\n\n# ...existing code...\n\ndef benchmark_batch_sizes(topic_pairs: List[tuple], llm_choice, batch_sizes: List[int]):\n \"\"\"Benchmark performance for different batch sizes.\"\"\"\n results = {}\n for batch_size in batch_sizes:\n print(f\"\\nBenchmarking batch size: {batch_size}\")\n start_time = time.time()\n for i in range(0, len(topic_pairs), batch_size):\n batch = topic_pairs[i:i + batch_size]\n batch_score_topics(batch, llm_choice)\n total_time = time.time() - start_time\n throughput = len(topic_pairs) / total_time\n results[batch_size] = throughput\n print(f\"Batch size: {batch_size}, Throughput: {throughput:.2f} pairs/second\")\n return results\n\ndef plot_results(results):\n \"\"\"Plot the benchmarking results.\"\"\"\n batch_sizes = list(results.keys())\n throughputs = list(results.values())\n plt.figure(figsize=(10, 6))\n plt.plot(batch_sizes, throughputs, marker='o')\n plt.xlabel('Batch Size')\n plt.ylabel('Throughput (pairs/second)')\n plt.title('Throughput vs Batch Size')\n plt.grid(True)\n plt.show()\n\ndef main():\n # Load the model with generate.choice configured\n llm_choice = load_model()\n \n # List of topic pairs to score\n topic_pairs = [\n ('artificial intelligence', 'machine learning'),\n ('quantum physics', 'astronomy'),\n ('poetry', 'literature'),\n ('basketball', 'soccer'),\n ('biology', 'chemistry'),\n ('music', 'painting'),\n ('cooking', 'baking'),\n ('psychology', 'neuroscience')\n ]\n \n # Benchmark performance for different batch sizes\n batch_sizes = [1, 2, 4, 8] # Define different batch sizes to test\n results =","supportThemeIcons":false,"supportHtml":false}}]}
2025-03-11 18:17:27.074 [trace] [Window] ChatListItemRenderer#getNextProgressiveRenderContent: Want to render 202 at 13.783421606789611 words/s, counting...
2025-03-11 18:17:27.074 [trace] [Window] ChatListItemRenderer#getNextProgressiveRenderContent: Chunk 0: Want to render 202 words and found 202 words. Total words in chunk: 216
2025-03-11 18:17:27.074 [trace] [Window] ChatListItemRenderer#getNextProgressiveRenderContent: Want to render 202 words. Rendering 202 words. Buffer: 14 words
2025-03-11 18:17:27.074 [trace] [Window] ChatListItemRenderer#doNextProgressiveRender: not rendering any new content this tick, but more available
2025-03-11 18:17:27.123 [trace] [Window] ChatListItemRenderer#doNextProgressiveRender: START progressive render, index=1, renderData={"lastRenderTime":1741717047023,"renderedWordCount":202,"renderedParts":[{"kind":"markdownContent","content":{"value":"### [test.py](file:///home/user/dev/testing/test.py)\n\nAdd functionality to compare throughput for different batch sizes and generate graphs.\n\n````python\n<vscode_codeblock_uri>file:///home/user/dev/testing/test.py</vscode_codeblock_uri>import time\nimport warnings\nimport os\nfrom typing import List\nimport matplotlib.pyplot as plt # Add matplotlib for graphing\n\nwarnings.filterwarnings(\"ignore\")\nos.environ['TF_CPP_MIN_LOG_LEVEL'] = '3'\n\nfrom outlines import models, generate, samplers\n\n# ...existing code...\n\ndef benchmark_batch_sizes(topic_pairs: List[tuple], llm_choice, batch_sizes: List[int]):\n \"\"\"Benchmark performance for different batch sizes.\"\"\"\n results = {}\n for batch_size in batch_sizes:\n print(f\"\\nBenchmarking batch size: {batch_size}\")\n start_time = time.time()\n for i in range(0, len(topic_pairs), batch_size):\n batch = topic_pairs[i:i + batch_size]\n batch_score_topics(batch, llm_choice)\n total_time = time.time() - start_time\n throughput = len(topic_pairs) / total_time\n results[batch_size] = throughput\n print(f\"Batch size: {batch_size}, Throughput: {throughput:.2f} pairs/second\")\n return results\n\ndef plot_results(results):\n \"\"\"Plot the benchmarking results.\"\"\"\n batch_sizes = list(results.keys())\n throughputs = list(results.values())\n plt.figure(figsize=(10, 6))\n plt.plot(batch_sizes, throughputs, marker='o')\n plt.xlabel('Batch Size')\n plt.ylabel('Throughput (pairs/second)')\n plt.title('Throughput vs Batch Size')\n plt.grid(True)\n plt.show()\n\ndef main():\n # Load the model with generate.choice configured\n llm_choice = load_model()\n \n # List of topic pairs to score\n topic_pairs = [\n ('artificial intelligence', 'machine learning'),\n ('quantum physics', 'astronomy'),\n ('poetry', 'literature'),\n ('basketball', 'soccer'),\n ('biology', 'chemistry'),\n ('music', 'painting'),\n ('cooking', 'baking'),\n ('psychology', 'neuroscience')\n ]\n \n # Benchmark performance for different batch sizes\n batch_sizes = [1, 2, 4, 8] # Define different batch sizes to test\n results =","supportThemeIcons":false,"supportHtml":false}}]}
2025-03-11 18:17:27.123 [trace] [Window] ChatListItemRenderer#getNextProgressiveRenderContent: Want to render 203 at 13.783421606789611 words/s, counting...
2025-03-11 18:17:27.123 [trace] [Window] ChatListItemRenderer#getNextProgressiveRenderContent: Chunk 0: Want to render 203 words and found 203 words. Total words in chunk: 216
2025-03-11 18:17:27.124 [trace] [Window] ChatListItemRenderer#getNextProgressiveRenderContent: Want to render 203 words. Rendering 203 words. Buffer: 13 words
2025-03-11 18:17:27.124 [trace] [Window] ChatListItemRenderer#doNextProgressiveRender: not rendering any new content this tick, but more available
2025-03-11 18:17:27.176 [trace] [Window] ChatListItemRenderer#doNextProgressiveRender: START progressive render, index=1, renderData={"lastRenderTime":1741717047123,"renderedWordCount":203,"renderedParts":[{"kind":"markdownContent","content":{"value":"### [test.py](file:///home/user/dev/testing/test.py)\n\nAdd functionality to compare throughput for different batch sizes and generate graphs.\n\n````python\n<vscode_codeblock_uri>file:///home/user/dev/testing/test.py</vscode_codeblock_uri>import time\nimport warnings\nimport os\nfrom typing import List\nimport matplotlib.pyplot as plt # Add matplotlib for graphing\n\nwarnings.filterwarnings(\"ignore\")\nos.environ['TF_CPP_MIN_LOG_LEVEL'] = '3'\n\nfrom outlines import models, generate, samplers\n\n# ...existing code...\n\ndef benchmark_batch_sizes(topic_pairs: List[tuple], llm_choice, batch_sizes: List[int]):\n \"\"\"Benchmark performance for different batch sizes.\"\"\"\n results = {}\n for batch_size in batch_sizes:\n print(f\"\\nBenchmarking batch size: {batch_size}\")\n start_time = time.time()\n for i in range(0, len(topic_pairs), batch_size):\n batch = topic_pairs[i:i + batch_size]\n batch_score_topics(batch, llm_choice)\n total_time = time.time() - start_time\n throughput = len(topic_pairs) / total_time\n results[batch_size] = throughput\n print(f\"Batch size: {batch_size}, Throughput: {throughput:.2f} pairs/second\")\n return results\n\ndef plot_results(results):\n \"\"\"Plot the benchmarking results.\"\"\"\n batch_sizes = list(results.keys())\n throughputs = list(results.values())\n plt.figure(figsize=(10, 6))\n plt.plot(batch_sizes, throughputs, marker='o')\n plt.xlabel('Batch Size')\n plt.ylabel('Throughput (pairs/second)')\n plt.title('Throughput vs Batch Size')\n plt.grid(True)\n plt.show()\n\ndef main():\n # Load the model with generate.choice configured\n llm_choice = load_model()\n \n # List of topic pairs to score\n topic_pairs = [\n ('artificial intelligence', 'machine learning'),\n ('quantum physics', 'astronomy'),\n ('poetry', 'literature'),\n ('basketball', 'soccer'),\n ('biology', 'chemistry'),\n ('music', 'painting'),\n ('cooking', 'baking'),\n ('psychology', 'neuroscience')\n ]\n \n # Benchmark performance for different batch sizes\n batch_sizes = [1, 2, 4, 8] # Define different batch sizes to test\n results = benchmark_batch_sizes(topic_pairs,","supportThemeIcons":false,"supportHtml":false}}]}
2025-03-11 18:17:27.176 [trace] [Window] ChatListItemRenderer#getNextProgressiveRenderContent: Want to render 203 at 13.783421606789611 words/s, counting...
2025-03-11 18:17:27.176 [trace] [Window] ChatListItemRenderer#getNextProgressiveRenderContent: Chunk 0: Want to render 203 words and found 203 words. Total words in chunk: 216
2025-03-11 18:17:27.176 [trace] [Window] ChatListItemRenderer#getNextProgressiveRenderContent: Want to render 203 words. Rendering 203 words. Buffer: 13 words
2025-03-11 18:17:27.176 [trace] [Window] ChatListItemRenderer#doNextProgressiveRender: not rendering any new content this tick, but more available
2025-03-11 18:17:27.223 [trace] [Window] ChatListItemRenderer#doNextProgressiveRender: START progressive render, index=1, renderData={"lastRenderTime":1741717047123,"renderedWordCount":203,"renderedParts":[{"kind":"markdownContent","content":{"value":"### [test.py](file:///home/user/dev/testing/test.py)\n\nAdd functionality to compare throughput for different batch sizes and generate graphs.\n\n````python\n<vscode_codeblock_uri>file:///home/user/dev/testing/test.py</vscode_codeblock_uri>import time\nimport warnings\nimport os\nfrom typing import List\nimport matplotlib.pyplot as plt # Add matplotlib for graphing\n\nwarnings.filterwarnings(\"ignore\")\nos.environ['TF_CPP_MIN_LOG_LEVEL'] = '3'\n\nfrom outlines import models, generate, samplers\n\n# ...existing code...\n\ndef benchmark_batch_sizes(topic_pairs: List[tuple], llm_choice, batch_sizes: List[int]):\n \"\"\"Benchmark performance for different batch sizes.\"\"\"\n results = {}\n for batch_size in batch_sizes:\n print(f\"\\nBenchmarking batch size: {batch_size}\")\n start_time = time.time()\n for i in range(0, len(topic_pairs), batch_size):\n batch = topic_pairs[i:i + batch_size]\n batch_score_topics(batch, llm_choice)\n total_time = time.time() - start_time\n throughput = len(topic_pairs) / total_time\n results[batch_size] = throughput\n print(f\"Batch size: {batch_size}, Throughput: {throughput:.2f} pairs/second\")\n return results\n\ndef plot_results(results):\n \"\"\"Plot the benchmarking results.\"\"\"\n batch_sizes = list(results.keys())\n throughputs = list(results.values())\n plt.figure(figsize=(10, 6))\n plt.plot(batch_sizes, throughputs, marker='o')\n plt.xlabel('Batch Size')\n plt.ylabel('Throughput (pairs/second)')\n plt.title('Throughput vs Batch Size')\n plt.grid(True)\n plt.show()\n\ndef main():\n # Load the model with generate.choice configured\n llm_choice = load_model()\n \n # List of topic pairs to score\n topic_pairs = [\n ('artificial intelligence', 'machine learning'),\n ('quantum physics', 'astronomy'),\n ('poetry', 'literature'),\n ('basketball', 'soccer'),\n ('biology', 'chemistry'),\n ('music', 'painting'),\n ('cooking', 'baking'),\n ('psychology', 'neuroscience')\n ]\n \n # Benchmark performance for different batch sizes\n batch_sizes = [1, 2, 4, 8] # Define different batch sizes to test\n results = benchmark_batch_sizes(topic_pairs,","supportThemeIcons":false,"supportHtml":false}}]}
2025-03-11 18:17:27.224 [trace] [Window] ChatListItemRenderer#getNextProgressiveRenderContent: Want to render 204 at 13.783421606789611 words/s, counting...
2025-03-11 18:17:27.224 [trace] [Window] ChatListItemRenderer#getNextProgressiveRenderContent: Chunk 0: Want to render 204 words and found 204 words. Total words in chunk: 216
2025-03-11 18:17:27.224 [trace] [Window] ChatListItemRenderer#getNextProgressiveRenderContent: Want to render 204 words. Rendering 204 words. Buffer: 12 words
2025-03-11 18:17:27.225 [trace] [Window] ChatListItemRenderer#doNextProgressiveRender: not rendering any new content this tick, but more available
2025-03-11 18:17:27.273 [trace] [Window] ChatListItemRenderer#doNextProgressiveRender: START progressive render, index=1, renderData={"lastRenderTime":1741717047223,"renderedWordCount":204,"renderedParts":[{"kind":"markdownContent","content":{"value":"### [test.py](file:///home/user/dev/testing/test.py)\n\nAdd functionality to compare throughput for different batch sizes and generate graphs.\n\n````python\n<vscode_codeblock_uri>file:///home/user/dev/testing/test.py</vscode_codeblock_uri>import time\nimport warnings\nimport os\nfrom typing import List\nimport matplotlib.pyplot as plt # Add matplotlib for graphing\n\nwarnings.filterwarnings(\"ignore\")\nos.environ['TF_CPP_MIN_LOG_LEVEL'] = '3'\n\nfrom outlines import models, generate, samplers\n\n# ...existing code...\n\ndef benchmark_batch_sizes(topic_pairs: List[tuple], llm_choice, batch_sizes: List[int]):\n \"\"\"Benchmark performance for different batch sizes.\"\"\"\n results = {}\n for batch_size in batch_sizes:\n print(f\"\\nBenchmarking batch size: {batch_size}\")\n start_time = time.time()\n for i in range(0, len(topic_pairs), batch_size):\n batch = topic_pairs[i:i + batch_size]\n batch_score_topics(batch, llm_choice)\n total_time = time.time() - start_time\n throughput = len(topic_pairs) / total_time\n results[batch_size] = throughput\n print(f\"Batch size: {batch_size}, Throughput: {throughput:.2f} pairs/second\")\n return results\n\ndef plot_results(results):\n \"\"\"Plot the benchmarking results.\"\"\"\n batch_sizes = list(results.keys())\n throughputs = list(results.values())\n plt.figure(figsize=(10, 6))\n plt.plot(batch_sizes, throughputs, marker='o')\n plt.xlabel('Batch Size')\n plt.ylabel('Throughput (pairs/second)')\n plt.title('Throughput vs Batch Size')\n plt.grid(True)\n plt.show()\n\ndef main():\n # Load the model with generate.choice configured\n llm_choice = load_model()\n \n # List of topic pairs to score\n topic_pairs = [\n ('artificial intelligence', 'machine learning'),\n ('quantum physics', 'astronomy'),\n ('poetry', 'literature'),\n ('basketball', 'soccer'),\n ('biology', 'chemistry'),\n ('music', 'painting'),\n ('cooking', 'baking'),\n ('psychology', 'neuroscience')\n ]\n \n # Benchmark performance for different batch sizes\n batch_sizes = [1, 2, 4, 8] # Define different batch sizes to test\n results = benchmark_batch_sizes(topic_pairs, llm_choice,","supportThemeIcons":false,"supportHtml":false}}]}
2025-03-11 18:17:27.274 [trace] [Window] ChatListItemRenderer#getNextProgressiveRenderContent: Want to render 204 at 13.783421606789611 words/s, counting...
2025-03-11 18:17:27.274 [trace] [Window] ChatListItemRenderer#getNextProgressiveRenderContent: Chunk 0: Want to render 204 words and found 204 words. Total words in chunk: 216
2025-03-11 18:17:27.275 [trace] [Window] ChatListItemRenderer#getNextProgressiveRenderContent: Want to render 204 words. Rendering 204 words. Buffer: 12 words
2025-03-11 18:17:27.275 [trace] [Window] ChatListItemRenderer#doNextProgressiveRender: not rendering any new content this tick, but more available
2025-03-11 18:17:27.324 [trace] [Window] ChatListItemRenderer#doNextProgressiveRender: START progressive render, index=1, renderData={"lastRenderTime":1741717047223,"renderedWordCount":204,"renderedParts":[{"kind":"markdownContent","content":{"value":"### [test.py](file:///home/user/dev/testing/test.py)\n\nAdd functionality to compare throughput for different batch sizes and generate graphs.\n\n````python\n<vscode_codeblock_uri>file:///home/user/dev/testing/test.py</vscode_codeblock_uri>import time\nimport warnings\nimport os\nfrom typing import List\nimport matplotlib.pyplot as plt # Add matplotlib for graphing\n\nwarnings.filterwarnings(\"ignore\")\nos.environ['TF_CPP_MIN_LOG_LEVEL'] = '3'\n\nfrom outlines import models, generate, samplers\n\n# ...existing code...\n\ndef benchmark_batch_sizes(topic_pairs: List[tuple], llm_choice, batch_sizes: List[int]):\n \"\"\"Benchmark performance for different batch sizes.\"\"\"\n results = {}\n for batch_size in batch_sizes:\n print(f\"\\nBenchmarking batch size: {batch_size}\")\n start_time = time.time()\n for i in range(0, len(topic_pairs), batch_size):\n batch = topic_pairs[i:i + batch_size]\n batch_score_topics(batch, llm_choice)\n total_time = time.time() - start_time\n throughput = len(topic_pairs) / total_time\n results[batch_size] = throughput\n print(f\"Batch size: {batch_size}, Throughput: {throughput:.2f} pairs/second\")\n return results\n\ndef plot_results(results):\n \"\"\"Plot the benchmarking results.\"\"\"\n batch_sizes = list(results.keys())\n throughputs = list(results.values())\n plt.figure(figsize=(10, 6))\n plt.plot(batch_sizes, throughputs, marker='o')\n plt.xlabel('Batch Size')\n plt.ylabel('Throughput (pairs/second)')\n plt.title('Throughput vs Batch Size')\n plt.grid(True)\n plt.show()\n\ndef main():\n # Load the model with generate.choice configured\n llm_choice = load_model()\n \n # List of topic pairs to score\n topic_pairs = [\n ('artificial intelligence', 'machine learning'),\n ('quantum physics', 'astronomy'),\n ('poetry', 'literature'),\n ('basketball', 'soccer'),\n ('biology', 'chemistry'),\n ('music', 'painting'),\n ('cooking', 'baking'),\n ('psychology', 'neuroscience')\n ]\n \n # Benchmark performance for different batch sizes\n batch_sizes = [1, 2, 4, 8] # Define different batch sizes to test\n results = benchmark_batch_sizes(topic_pairs, llm_choice,","supportThemeIcons":false,"supportHtml":false}}]}
2025-03-11 18:17:27.324 [trace] [Window] ChatListItemRenderer#getNextProgressiveRenderContent: Want to render 205 at 13.783421606789611 words/s, counting...
2025-03-11 18:17:27.324 [trace] [Window] ChatListItemRenderer#getNextProgressiveRenderContent: Chunk 0: Want to render 205 words and found 205 words. Total words in chunk: 216
2025-03-11 18:17:27.324 [trace] [Window] ChatListItemRenderer#getNextProgressiveRenderContent: Want to render 205 words. Rendering 205 words. Buffer: 11 words
2025-03-11 18:17:27.325 [trace] [Window] ChatListItemRenderer#doNextProgressiveRender: not rendering any new content this tick, but more available
2025-03-11 18:17:27.373 [trace] [Window] ChatListItemRenderer#doNextProgressiveRender: START progressive render, index=1, renderData={"lastRenderTime":1741717047324,"renderedWordCount":205,"renderedParts":[{"kind":"markdownContent","content":{"value":"### [test.py](file:///home/user/dev/testing/test.py)\n\nAdd functionality to compare throughput for different batch sizes and generate graphs.\n\n````python\n<vscode_codeblock_uri>file:///home/user/dev/testing/test.py</vscode_codeblock_uri>import time\nimport warnings\nimport os\nfrom typing import List\nimport matplotlib.pyplot as plt # Add matplotlib for graphing\n\nwarnings.filterwarnings(\"ignore\")\nos.environ['TF_CPP_MIN_LOG_LEVEL'] = '3'\n\nfrom outlines import models, generate, samplers\n\n# ...existing code...\n\ndef benchmark_batch_sizes(topic_pairs: List[tuple], llm_choice, batch_sizes: List[int]):\n \"\"\"Benchmark performance for different batch sizes.\"\"\"\n results = {}\n for batch_size in batch_sizes:\n print(f\"\\nBenchmarking batch size: {batch_size}\")\n start_time = time.time()\n for i in range(0, len(topic_pairs), batch_size):\n batch = topic_pairs[i:i + batch_size]\n batch_score_topics(batch, llm_choice)\n total_time = time.time() - start_time\n throughput = len(topic_pairs) / total_time\n results[batch_size] = throughput\n print(f\"Batch size: {batch_size}, Throughput: {throughput:.2f} pairs/second\")\n return results\n\ndef plot_results(results):\n \"\"\"Plot the benchmarking results.\"\"\"\n batch_sizes = list(results.keys())\n throughputs = list(results.values())\n plt.figure(figsize=(10, 6))\n plt.plot(batch_sizes, throughputs, marker='o')\n plt.xlabel('Batch Size')\n plt.ylabel('Throughput (pairs/second)')\n plt.title('Throughput vs Batch Size')\n plt.grid(True)\n plt.show()\n\ndef main():\n # Load the model with generate.choice configured\n llm_choice = load_model()\n \n # List of topic pairs to score\n topic_pairs = [\n ('artificial intelligence', 'machine learning'),\n ('quantum physics', 'astronomy'),\n ('poetry', 'literature'),\n ('basketball', 'soccer'),\n ('biology', 'chemistry'),\n ('music', 'painting'),\n ('cooking', 'baking'),\n ('psychology', 'neuroscience')\n ]\n \n # Benchmark performance for different batch sizes\n batch_sizes = [1, 2, 4, 8] # Define different batch sizes to test\n results = benchmark_batch_sizes(topic_pairs, llm_choice, batch_sizes)","supportThemeIcons":false,"supportHtml":false}}]}
2025-03-11 18:17:27.374 [trace] [Window] ChatListItemRenderer#getNextProgressiveRenderContent: Want to render 205 at 13.783421606789611 words/s, counting...
2025-03-11 18:17:27.374 [trace] [Window] ChatListItemRenderer#getNextProgressiveRenderContent: Chunk 0: Want to render 205 words and found 205 words. Total words in chunk: 216
2025-03-11 18:17:27.374 [trace] [Window] ChatListItemRenderer#getNextProgressiveRenderContent: Want to render 205 words. Rendering 205 words. Buffer: 11 words
2025-03-11 18:17:27.375 [trace] [Window] ChatListItemRenderer#doNextProgressiveRender: not rendering any new content this tick, but more available
2025-03-11 18:17:27.423 [trace] [Window] ChatListItemRenderer#doNextProgressiveRender: START progressive render, index=1, renderData={"lastRenderTime":1741717047324,"renderedWordCount":205,"renderedParts":[{"kind":"markdownContent","content":{"value":"### [test.py](file:///home/user/dev/testing/test.py)\n\nAdd functionality to compare throughput for different batch sizes and generate graphs.\n\n````python\n<vscode_codeblock_uri>file:///home/user/dev/testing/test.py</vscode_codeblock_uri>import time\nimport warnings\nimport os\nfrom typing import List\nimport matplotlib.pyplot as plt # Add matplotlib for graphing\n\nwarnings.filterwarnings(\"ignore\")\nos.environ['TF_CPP_MIN_LOG_LEVEL'] = '3'\n\nfrom outlines import models, generate, samplers\n\n# ...existing code...\n\ndef benchmark_batch_sizes(topic_pairs: List[tuple], llm_choice, batch_sizes: List[int]):\n \"\"\"Benchmark performance for different batch sizes.\"\"\"\n results = {}\n for batch_size in batch_sizes:\n print(f\"\\nBenchmarking batch size: {batch_size}\")\n start_time = time.time()\n for i in range(0, len(topic_pairs), batch_size):\n batch = topic_pairs[i:i + batch_size]\n batch_score_topics(batch, llm_choice)\n total_time = time.time() - start_time\n throughput = len(topic_pairs) / total_time\n results[batch_size] = throughput\n print(f\"Batch size: {batch_size}, Throughput: {throughput:.2f} pairs/second\")\n return results\n\ndef plot_results(results):\n \"\"\"Plot the benchmarking results.\"\"\"\n batch_sizes = list(results.keys())\n throughputs = list(results.values())\n plt.figure(figsize=(10, 6))\n plt.plot(batch_sizes, throughputs, marker='o')\n plt.xlabel('Batch Size')\n plt.ylabel('Throughput (pairs/second)')\n plt.title('Throughput vs Batch Size')\n plt.grid(True)\n plt.show()\n\ndef main():\n # Load the model with generate.choice configured\n llm_choice = load_model()\n \n # List of topic pairs to score\n topic_pairs = [\n ('artificial intelligence', 'machine learning'),\n ('quantum physics', 'astronomy'),\n ('poetry', 'literature'),\n ('basketball', 'soccer'),\n ('biology', 'chemistry'),\n ('music', 'painting'),\n ('cooking', 'baking'),\n ('psychology', 'neuroscience')\n ]\n \n # Benchmark performance for different batch sizes\n batch_sizes = [1, 2, 4, 8] # Define different batch sizes to test\n results = benchmark_batch_sizes(topic_pairs, llm_choice, batch_sizes)","supportThemeIcons":false,"supportHtml":false}}]}
2025-03-11 18:17:27.423 [trace] [Window] ChatListItemRenderer#getNextProgressiveRenderContent: Want to render 206 at 13.783421606789611 words/s, counting...
2025-03-11 18:17:27.424 [trace] [Window] ChatListItemRenderer#getNextProgressiveRenderContent: Chunk 0: Want to render 206 words and found 206 words. Total words in chunk: 216
2025-03-11 18:17:27.424 [trace] [Window] ChatListItemRenderer#getNextProgressiveRenderContent: Want to render 206 words. Rendering 206 words. Buffer: 10 words
2025-03-11 18:17:27.424 [trace] [Window] ChatListItemRenderer#doNextProgressiveRender: not rendering any new content this tick, but more available
2025-03-11 18:17:27.473 [trace] [Window] ChatListItemRenderer#doNextProgressiveRender: START progressive render, index=1, renderData={"lastRenderTime":1741717047423,"renderedWordCount":206,"renderedParts":[{"kind":"markdownContent","content":{"value":"### [test.py](file:///home/user/dev/testing/test.py)\n\nAdd functionality to compare throughput for different batch sizes and generate graphs.\n\n````python\n<vscode_codeblock_uri>file:///home/user/dev/testing/test.py</vscode_codeblock_uri>import time\nimport warnings\nimport os\nfrom typing import List\nimport matplotlib.pyplot as plt # Add matplotlib for graphing\n\nwarnings.filterwarnings(\"ignore\")\nos.environ['TF_CPP_MIN_LOG_LEVEL'] = '3'\n\nfrom outlines import models, generate, samplers\n\n# ...existing code...\n\ndef benchmark_batch_sizes(topic_pairs: List[tuple], llm_choice, batch_sizes: List[int]):\n \"\"\"Benchmark performance for different batch sizes.\"\"\"\n results = {}\n for batch_size in batch_sizes:\n print(f\"\\nBenchmarking batch size: {batch_size}\")\n start_time = time.time()\n for i in range(0, len(topic_pairs), batch_size):\n batch = topic_pairs[i:i + batch_size]\n batch_score_topics(batch, llm_choice)\n total_time = time.time() - start_time\n throughput = len(topic_pairs) / total_time\n results[batch_size] = throughput\n print(f\"Batch size: {batch_size}, Throughput: {throughput:.2f} pairs/second\")\n return results\n\ndef plot_results(results):\n \"\"\"Plot the benchmarking results.\"\"\"\n batch_sizes = list(results.keys())\n throughputs = list(results.values())\n plt.figure(figsize=(10, 6))\n plt.plot(batch_sizes, throughputs, marker='o')\n plt.xlabel('Batch Size')\n plt.ylabel('Throughput (pairs/second)')\n plt.title('Throughput vs Batch Size')\n plt.grid(True)\n plt.show()\n\ndef main():\n # Load the model with generate.choice configured\n llm_choice = load_model()\n \n # List of topic pairs to score\n topic_pairs = [\n ('artificial intelligence', 'machine learning'),\n ('quantum physics', 'astronomy'),\n ('poetry', 'literature'),\n ('basketball', 'soccer'),\n ('biology', 'chemistry'),\n ('music', 'painting'),\n ('cooking', 'baking'),\n ('psychology', 'neuroscience')\n ]\n \n # Benchmark performance for different batch sizes\n batch_sizes = [1, 2, 4, 8] # Define different batch sizes to test\n results = benchmark_batch_sizes(topic_pairs, llm_choice, batch_sizes)\n \n #","supportThemeIcons":false,"supportHtml":false}}]}
2025-03-11 18:17:27.474 [trace] [Window] ChatListItemRenderer#getNextProgressiveRenderContent: Want to render 206 at 13.783421606789611 words/s, counting...
2025-03-11 18:17:27.474 [trace] [Window] ChatListItemRenderer#getNextProgressiveRenderContent: Chunk 0: Want to render 206 words and found 206 words. Total words in chunk: 216
2025-03-11 18:17:27.474 [trace] [Window] ChatListItemRenderer#getNextProgressiveRenderContent: Want to render 206 words. Rendering 206 words. Buffer: 10 words
2025-03-11 18:17:27.474 [trace] [Window] ChatListItemRenderer#doNextProgressiveRender: not rendering any new content this tick, but more available
2025-03-11 18:17:27.523 [trace] [Window] ChatListItemRenderer#doNextProgressiveRender: START progressive render, index=1, renderData={"lastRenderTime":1741717047423,"renderedWordCount":206,"renderedParts":[{"kind":"markdownContent","content":{"value":"### [test.py](file:///home/user/dev/testing/test.py)\n\nAdd functionality to compare throughput for different batch sizes and generate graphs.\n\n````python\n<vscode_codeblock_uri>file:///home/user/dev/testing/test.py</vscode_codeblock_uri>import time\nimport warnings\nimport os\nfrom typing import List\nimport matplotlib.pyplot as plt # Add matplotlib for graphing\n\nwarnings.filterwarnings(\"ignore\")\nos.environ['TF_CPP_MIN_LOG_LEVEL'] = '3'\n\nfrom outlines import models, generate, samplers\n\n# ...existing code...\n\ndef benchmark_batch_sizes(topic_pairs: List[tuple], llm_choice, batch_sizes: List[int]):\n \"\"\"Benchmark performance for different batch sizes.\"\"\"\n results = {}\n for batch_size in batch_sizes:\n print(f\"\\nBenchmarking batch size: {batch_size}\")\n start_time = time.time()\n for i in range(0, len(topic_pairs), batch_size):\n batch = topic_pairs[i:i + batch_size]\n batch_score_topics(batch, llm_choice)\n total_time = time.time() - start_time\n throughput = len(topic_pairs) / total_time\n results[batch_size] = throughput\n print(f\"Batch size: {batch_size}, Throughput: {throughput:.2f} pairs/second\")\n return results\n\ndef plot_results(results):\n \"\"\"Plot the benchmarking results.\"\"\"\n batch_sizes = list(results.keys())\n throughputs = list(results.values())\n plt.figure(figsize=(10, 6))\n plt.plot(batch_sizes, throughputs, marker='o')\n plt.xlabel('Batch Size')\n plt.ylabel('Throughput (pairs/second)')\n plt.title('Throughput vs Batch Size')\n plt.grid(True)\n plt.show()\n\ndef main():\n # Load the model with generate.choice configured\n llm_choice = load_model()\n \n # List of topic pairs to score\n topic_pairs = [\n ('artificial intelligence', 'machine learning'),\n ('quantum physics', 'astronomy'),\n ('poetry', 'literature'),\n ('basketball', 'soccer'),\n ('biology', 'chemistry'),\n ('music', 'painting'),\n ('cooking', 'baking'),\n ('psychology', 'neuroscience')\n ]\n \n # Benchmark performance for different batch sizes\n batch_sizes = [1, 2, 4, 8] # Define different batch sizes to test\n results = benchmark_batch_sizes(topic_pairs, llm_choice, batch_sizes)\n \n #","supportThemeIcons":false,"supportHtml":false}}]}
2025-03-11 18:17:27.523 [trace] [Window] ChatListItemRenderer#getNextProgressiveRenderContent: Want to render 207 at 13.783421606789611 words/s, counting...
2025-03-11 18:17:27.523 [trace] [Window] ChatListItemRenderer#getNextProgressiveRenderContent: Chunk 0: Want to render 207 words and found 207 words. Total words in chunk: 216
2025-03-11 18:17:27.524 [trace] [Window] ChatListItemRenderer#getNextProgressiveRenderContent: Want to render 207 words. Rendering 207 words. Buffer: 9 words
2025-03-11 18:17:27.524 [trace] [Window] ChatListItemRenderer#doNextProgressiveRender: not rendering any new content this tick, but more available
2025-03-11 18:17:27.558 [info] [GitHub Copilot Chat] srequest done: chatRequestId: [REDACTED], speculationRequestId: [REDACTED]
2025-03-11 18:17:27.559 [trace] [Window] ChatService#sendRequest: Provider returned progress: {"kind":"textEdit","uri":{"$mid":1,"fsPath":"/home/user/dev/testing/test.py","external":"file:///home/user/dev/testing/test.py","path":"/home/user/dev/testing/test.py","scheme":"file"},"edits":[{"text":"","range":{"startLineNumber":1,"startColumn":1,"endLineNumber":73,"endColumn":1}}]}
2025-03-11 18:17:27.560 [trace] [Window] ChatResponseViewModel#onDidChange: Update- no new words
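The progress message above carries a single range edit that deletes everything from line 1, column 1 through line 73, column 1 (empty replacement text), clearing the old file body before the new content lands. An illustrative way to apply such a 1-based line/column edit to a flat string; this is a sketch, not VS Code's edit engine:

````python
def apply_edit(text: str, start_line: int, start_col: int,
               end_line: int, end_col: int, new_text: str) -> str:
    lines = text.splitlines(keepends=True)
    # Character offsets of the edit boundaries within the flat string.
    start = sum(len(l) for l in lines[:start_line - 1]) + (start_col - 1)
    end = sum(len(l) for l in lines[:end_line - 1]) + (end_col - 1)
    return text[:start] + new_text + text[end:]

# The logged edit is equivalent to:
# apply_edit(old_source, 1, 1, 73, 1, "")
````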
2025-03-11 18:17:27.564 [trace] [Window] [History stack global-default]: notifyNavigation() (editor: file:///home/user/dev/testing/test.py, event: edit)
2025-03-11 18:17:27.564 [trace] [Window] [History stack global-default]: notifyNavigation() not ignoring (editor: file:///home/user/dev/testing/test.py, event: edit)
2025-03-11 18:17:27.564 [trace] [Window] [History stack global-default]: onSelectionAwareEditorNavigation() (editor: file:///home/user/dev/testing/test.py, event: edit)
2025-03-11 18:17:27.565 [trace] [Window] [History stack global-default]: add() (editor: file:///home/user/dev/testing/test.py, event: <none>)
2025-03-11 18:17:27.565 [trace] [Window] [History stack global-default]: index: 21, navigating: false
2025-03-11 18:17:27.565 [trace] [Window] [History stack edits-default]: notifyNavigation() (editor: file:///home/user/dev/testing/test.py, event: edit)
2025-03-11 18:17:27.565 [trace] [Window] [History stack edits-default]: notifyNavigation() not ignoring (editor: file:///home/user/dev/testing/test.py, event: edit)
2025-03-11 18:17:27.565 [trace] [Window] [History stack edits-default]: onSelectionAwareEditorNavigation() (editor: file:///home/user/dev/testing/test.py, event: edit)
2025-03-11 18:17:27.565 [trace] [Window] [History stack edits-default]: add() (editor: file:///home/user/dev/testing/test.py, event: <none>)
2025-03-11 18:17:27.565 [trace] [Window] [History stack edits-default]: index: 11, navigating: false
2025-03-11 18:17:27.566 [trace] [Window] [History stack global-default]: notifyNavigation() (editor: file:///home/user/dev/testing/test.py, event: user)
2025-03-11 18:17:27.566 [trace] [Window] [History stack global-default]: notifyNavigation() not ignoring (editor: file:///home/user/dev/testing/test.py, event: user)
2025-03-11 18:17:27.566 [trace] [Window] [History stack global-default]: onSelectionAwareEditorNavigation() (editor: file:///home/user/dev/testing/test.py, event: user)
2025-03-11 18:17:27.566 [trace] [Window] [History stack global-default]: replace() (editor: file:///home/user/dev/testing/test.py, event: <none>)
2025-03-11 18:17:27.566 [trace] [Window] [History stack global-default]: index: 21, navigating: false
2025-03-11 18:17:27.566 [trace] [Window] [text file model] onModelContentChanged() - enter file:///home/user/dev/testing/test.py
2025-03-11 18:17:27.567 [trace] [Window] [text file model] onModelContentChanged() - new versionId 21 file:///home/user/dev/testing/test.py
2025-03-11 18:17:27.567 [trace] [Window] [text file model] onModelContentChanged() - model content changed and marked as dirty file:///home/user/dev/testing/test.py
2025-03-11 18:17:27.567 [trace] [Window] [backup tracker] scheduling backup file:///home/user/dev/testing/test.py
2025-03-11 18:17:27.570 [trace] [Window] [backup tracker] clearing pending backup creation file:///home/user/dev/testing/test.py
2025-03-11 18:17:27.571 [trace] [Window] [backup tracker] scheduling backup file:///home/user/dev/testing/test.py
2025-03-11 18:17:27.571 [trace] [Extension Host] ExtHostCommands#executeCommand inlineChat.hideHint
2025-03-11 18:17:27.572 [trace] [Window] ChatListItemRenderer#disposeElement: Disposing element, index=1
2025-03-11 18:17:27.572 [trace] [Window] ChatListDelegate#getHeight: response, height=133
2025-03-11 18:17:27.572 [trace] [Window] ChatListItemRenderer#renderElement: response, index=1
2025-03-11 18:17:27.572 [trace] [Window] ChatListItemRenderer#renderElement: start progressive render, index=1
2025-03-11 18:17:27.573 [trace] [Window] ChatListItemRenderer#doNextProgressiveRender: START progressive render, index=1, renderData={"lastRenderTime":1741717047523,"renderedWordCount":207,"renderedParts":[{"kind":"markdownContent","content":{"value":"### [test.py](file:///home/user/dev/testing/test.py)\n\nAdd functionality to compare throughput for different batch sizes and generate graphs.\n\n````python\n<vscode_codeblock_uri>file:///home/user/dev/testing/test.py</vscode_codeblock_uri>import time\nimport warnings\nimport os\nfrom typing import List\nimport matplotlib.pyplot as plt # Add matplotlib for graphing\n\nwarnings.filterwarnings(\"ignore\")\nos.environ['TF_CPP_MIN_LOG_LEVEL'] = '3'\n\nfrom outlines import models, generate, samplers\n\n# ...existing code...\n\ndef benchmark_batch_sizes(topic_pairs: List[tuple], llm_choice, batch_sizes: List[int]):\n \"\"\"Benchmark performance for different batch sizes.\"\"\"\n results = {}\n for batch_size in batch_sizes:\n print(f\"\\nBenchmarking batch size: {batch_size}\")\n start_time = time.time()\n for i in range(0, len(topic_pairs), batch_size):\n batch = topic_pairs[i:i + batch_size]\n batch_score_topics(batch, llm_choice)\n total_time = time.time() - start_time\n throughput = len(topic_pairs) / total_time\n results[batch_size] = throughput\n print(f\"Batch size: {batch_size}, Throughput: {throughput:.2f} pairs/second\")\n return results\n\ndef plot_results(results):\n \"\"\"Plot the benchmarking results.\"\"\"\n batch_sizes = list(results.keys())\n throughputs = list(results.values())\n plt.figure(figsize=(10, 6))\n plt.plot(batch_sizes, throughputs, marker='o')\n plt.xlabel('Batch Size')\n plt.ylabel('Throughput (pairs/second)')\n plt.title('Throughput vs Batch Size')\n plt.grid(True)\n plt.show()\n\ndef main():\n # Load the model with generate.choice configured\n llm_choice = load_model()\n \n # List of topic pairs to score\n topic_pairs = [\n ('artificial intelligence', 'machine learning'),\n ('quantum physics', 'astronomy'),\n ('poetry', 'literature'),\n ('basketball', 'soccer'),\n ('biology', 'chemistry'),\n ('music', 'painting'),\n ('cooking', 'baking'),\n ('psychology', 'neuroscience')\n ]\n \n # Benchmark performance for different batch sizes\n batch_sizes = [1, 2, 4, 8] # Define different batch sizes to test\n results = benchmark_batch_sizes(topic_pairs, llm_choice, batch_sizes)\n \n # Plot","supportThemeIcons":false,"supportHtml":false}}]}
2025-03-11 18:17:27.573 [trace] [Window] ChatListItemRenderer#getNextProgressiveRenderContent: Want to render 207 at 13.783421606789611 words/s, counting...
2025-03-11 18:17:27.573 [trace] [Window] ChatListItemRenderer#getNextProgressiveRenderContent: Chunk 0: Want to render 207 words and found 207 words. Total words in chunk: 216
2025-03-11 18:17:27.573 [trace] [Window] ChatListItemRenderer#getNextProgressiveRenderContent: Want to render 207 words. Rendering 207 words. Buffer: 9 words
2025-03-11 18:17:27.573 [trace] [Window] ChatListItemRenderer#doNextProgressiveRender: not rendering any new content this tick, but more available
2025-03-11 18:17:27.601 [trace] [Window] ChatService#sendRequest: Provider returned progress: {"kind":"textEdit","uri":{"$mid":1,"fsPath":"/home/user/dev/testing/test.py","external":"file:///home/user/dev/testing/test.py","path":"/home/user/dev/testing/test.py","scheme":"file"},"edits":[],"done":true}
2025-03-11 18:17:27.601 [trace] [Window] ChatResponseViewModel#onDidChange: Update- no new words
2025-03-11 18:17:27.602 [trace] [Window] ChatResponseViewModel#onDidChange: Update- no new words
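Note the terminator: a textEdit progress message with an empty "edits" array and "done": true closes the edit stream. A minimal consumer of that protocol, reusing the apply_edit helper sketched earlier; field names mirror the logged JSON, the consumer loop itself is an assumption. Edits are applied one at a time against the current text, which is fine for the single-edit messages seen in this log:

````python
def consume_edit_stream(messages, document: str) -> str:
    for msg in messages:
        for edit in msg.get("edits", []):
            r = edit["range"]
            document = apply_edit(document,
                                  r["startLineNumber"], r["startColumn"],
                                  r["endLineNumber"], r["endColumn"],
                                  edit["text"])
        if msg.get("done"):
            break                      # e.g. {"edits": [], "done": true}
    return document
````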
2025-03-11 18:17:27.603 [trace] [Window] ChatService#sendRequest: Provider returned response for session 817a69f6-73b9-417e-b618-7dfcda8614eb
2025-03-11 18:17:27.603 [trace] [Window] ChatResponseViewModel#onDidChange: Update- no new words
2025-03-11 18:17:27.605 [trace] [Window] ChatResponseViewModel#onDidChange: Update- no new words
2025-03-11 18:17:27.616 [trace] [Window] CommandService#executeCommand inlineChat.hideHint
2025-03-11 18:17:27.616 [trace] [Extension Host] [ms-python.python] INVOKE provider 'n=>n.provideInlayHints(E.revive(r),i,s)'
2025-03-11 18:17:27.617 [debug] [Window] [EditorAction2] NOT running command because its precondition is FALSE inlineChat.hideHint inlineChatShowingHint
2025-03-11 18:17:27.618 [trace] [Window] ChatListItemRenderer#disposeElement: Disposing element, index=1
2025-03-11 18:17:27.618 [trace] [Window] ChatListItemRenderer#disposeElement: Disposing element, index=0
2025-03-11 18:17:27.618 [trace] [Window] ChatListDelegate#getHeight: request, height=112
2025-03-11 18:17:27.618 [trace] [Window] ChatListDelegate#getHeight: response, height=133
2025-03-11 18:17:27.618 [trace] [Extension Host] [InlayHints] NO inlay hints from 'ms-python.python' for range {"startLineNumber":1,"startColumn":1,"endLineNumber":1,"endColumn":1}
2025-03-11 18:17:27.618 [trace] [Extension Host] [ms-python.python] provider DONE after 2ms
2025-03-11 18:17:27.619 [trace] [Window] ChatListItemRenderer#renderElement: response, index=1
2025-03-11 18:17:27.619 [trace] [Window] ChatListItemRenderer#renderElement: start progressive render, index=1
2025-03-11 18:17:27.619 [trace] [Window] ChatListItemRenderer#doNextProgressiveRender: START progressive render, index=1, renderData={"lastRenderTime":1741717047523,"renderedWordCount":207,"renderedParts":[{"kind":"markdownContent","content":{"value":"### [test.py](file:///home/user/dev/testing/test.py)\n\nAdd functionality to compare throughput for different batch sizes and generate graphs.\n\n````python\n<vscode_codeblock_uri>file:///home/user/dev/testing/test.py</vscode_codeblock_uri>import time\nimport warnings\nimport os\nfrom typing import List\nimport matplotlib.pyplot as plt # Add matplotlib for graphing\n\nwarnings.filterwarnings(\"ignore\")\nos.environ['TF_CPP_MIN_LOG_LEVEL'] = '3'\n\nfrom outlines import models, generate, samplers\n\n# ...existing code...\n\ndef benchmark_batch_sizes(topic_pairs: List[tuple], llm_choice, batch_sizes: List[int]):\n \"\"\"Benchmark performance for different batch sizes.\"\"\"\n results = {}\n for batch_size in batch_sizes:\n print(f\"\\nBenchmarking batch size: {batch_size}\")\n start_time = time.time()\n for i in range(0, len(topic_pairs), batch_size):\n batch = topic_pairs[i:i + batch_size]\n batch_score_topics(batch, llm_choice)\n total_time = time.time() - start_time\n throughput = len(topic_pairs) / total_time\n results[batch_size] = throughput\n print(f\"Batch size: {batch_size}, Throughput: {throughput:.2f} pairs/second\")\n return results\n\ndef plot_results(results):\n \"\"\"Plot the benchmarking results.\"\"\"\n batch_sizes = list(results.keys())\n throughputs = list(results.values())\n plt.figure(figsize=(10, 6))\n plt.plot(batch_sizes, throughputs, marker='o')\n plt.xlabel('Batch Size')\n plt.ylabel('Throughput (pairs/second)')\n plt.title('Throughput vs Batch Size')\n plt.grid(True)\n plt.show()\n\ndef main():\n # Load the model with generate.choice configured\n llm_choice = load_model()\n \n # List of topic pairs to score\n topic_pairs = [\n ('artificial intelligence', 'machine learning'),\n ('quantum physics', 'astronomy'),\n ('poetry', 'literature'),\n ('basketball', 'soccer'),\n ('biology', 'chemistry'),\n ('music', 'painting'),\n ('cooking', 'baking'),\n ('psychology', 'neuroscience')\n ]\n \n # Benchmark performance for different batch sizes\n batch_sizes = [1, 2, 4, 8] # Define different batch sizes to test\n results = benchmark_batch_sizes(topic_pairs, llm_choice, batch_sizes)\n \n # Plot","supportThemeIcons":false,"supportHtml":false}}]}
2025-03-11 18:17:27.620 [trace] [Window] ChatListItemRenderer#getNextProgressiveRenderContent: Want to render 214 at 80 words/s, counting...
2025-03-11 18:17:27.620 [trace] [Window] ChatListItemRenderer#getNextProgressiveRenderContent: Chunk 0: Want to render 214 words and found 214 words. Total words in chunk: 216
2025-03-11 18:17:27.620 [trace] [Window] ChatListItemRenderer#getNextProgressiveRenderContent: Want to render 214 words. Rendering 214 words. Buffer: 2 words
2025-03-11 18:17:27.620 [trace] [Window] ChatListItemRenderer#doNextProgressiveRender: not rendering any new content this tick, but more available
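The pacing rate jumps here from ~13.78 words/s to 80 words/s: once the provider has returned the full response (the "srequest done" and "Provider returned response" entries above), the renderer switches to a fast catch-up rate to drain the remaining buffer. A hedged sketch of that selection; the constant name and policy are assumptions:

````python
CATCH_UP_WORDS_PER_SECOND = 80  # matches the rate logged after completion

def effective_rate(measured_rate: float, response_complete: bool) -> float:
    """While streaming, pace at the measured rate; afterwards, catch up fast."""
    return CATCH_UP_WORDS_PER_SECOND if response_complete else measured_rate
````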
2025-03-11 18:17:27.620 [trace] [Window] ChatListItemRenderer#renderElement: request, index=0
2025-03-11 18:17:27.647 [trace] [Window] [DEBOUNCE: InlayHint] for file:///home/user/dev/testing/test.py is 25ms
2025-03-11 18:17:27.689 [trace] [Extension Host] ExtHostCommands#executeCommand setContext
2025-03-11 18:17:27.689 [trace] [Extension Host] ExtHostCommands#executeCommand _setContext
2025-03-11 18:17:27.692 [trace] [Window] ChatListItemRenderer#doNextProgressiveRender: START progressive render, index=1, renderData={"lastRenderTime":1741717047619,"renderedWordCount":214,"renderedParts":[{"kind":"markdownContent","content":{"value":"### [test.py](file:///home/user/dev/testing/test.py)\n\nAdd functionality to compare throughput for different batch sizes and generate graphs.\n\n````python\n<vscode_codeblock_uri>file:///home/user/dev/testing/test.py</vscode_codeblock_uri>import time\nimport warnings\nimport os\nfrom typing import List\nimport matplotlib.pyplot as plt # Add matplotlib for graphing\n\nwarnings.filterwarnings(\"ignore\")\nos.environ['TF_CPP_MIN_LOG_LEVEL'] = '3'\n\nfrom outlines import models, generate, samplers\n\n# ...existing code...\n\ndef benchmark_batch_sizes(topic_pairs: List[tuple], llm_choice, batch_sizes: List[int]):\n \"\"\"Benchmark performance for different batch sizes.\"\"\"\n results = {}\n for batch_size in batch_sizes:\n print(f\"\\nBenchmarking batch size: {batch_size}\")\n start_time = time.time()\n for i in range(0, len(topic_pairs), batch_size):\n batch = topic_pairs[i:i + batch_size]\n batch_score_topics(batch, llm_choice)\n total_time = time.time() - start_time\n throughput = len(topic_pairs) / total_time\n results[batch_size] = throughput\n print(f\"Batch size: {batch_size}, Throughput: {throughput:.2f} pairs/second\")\n return results\n\ndef plot_results(results):\n \"\"\"Plot the benchmarking results.\"\"\"\n batch_sizes = list(results.keys())\n throughputs = list(results.values())\n plt.figure(figsize=(10, 6))\n plt.plot(batch_sizes, throughputs, marker='o')\n plt.xlabel('Batch Size')\n plt.ylabel('Throughput (pairs/second)')\n plt.title('Throughput vs Batch Size')\n plt.grid(True)\n plt.show()\n\ndef main():\n # Load the model with generate.choice configured\n llm_choice = load_model()\n \n # List of topic pairs to score\n topic_pairs = [\n ('artificial intelligence', 'machine learning'),\n ('quantum physics', 'astronomy'),\n ('poetry', 'literature'),\n ('basketball', 'soccer'),\n ('biology', 'chemistry'),\n ('music', 'painting'),\n ('cooking', 'baking'),\n ('psychology', 'neuroscience')\n ]\n \n # Benchmark performance for different batch sizes\n batch_sizes = [1, 2, 4, 8] # Define different batch sizes to test\n results = benchmark_batch_sizes(topic_pairs, llm_choice, batch_sizes)\n \n # Plot the results\n plot_results(results)\n\nif __name__ == \"__main__\":","supportThemeIcons":false,"supportHtml":false}}]}
2025-03-11 18:17:27.695 [trace] [Extension Host] ExtHostCommands#executeCommand setContext
2025-03-11 18:17:27.695 [trace] [Extension Host] ExtHostCommands#executeCommand _setContext
2025-03-11 18:17:27.696 [trace] [Window] ChatListItemRenderer#getNextProgressiveRenderContent: Want to render 219 at 80 words/s, counting...
2025-03-11 18:17:27.696 [trace] [Window] ChatListItemRenderer#getNextProgressiveRenderContent: Chunk 0: Want to render 219 words and found 216 words. Total words in chunk: 216
2025-03-11 18:17:27.696 [trace] [Extension Host] ExtHostCommands#executeCommand setContext
2025-03-11 18:17:27.696 [trace] [Extension Host] ExtHostCommands#executeCommand _setContext
2025-03-11 18:17:27.697 [trace] [Window] ChatListItemRenderer#getNextProgressiveRenderContent: Want to render 219 words. Rendering 216 words. Buffer: 0 words
2025-03-11 18:17:27.697 [trace] [Window] ChatListItemRenderer#doNextProgressiveRender: doing progressive render, 2 parts to render
2025-03-11 18:17:27.698 [trace] [Extension Host] ExtHostCommands#executeCommand setContext
2025-03-11 18:17:27.698 [trace] [Extension Host] ExtHostCommands#executeCommand _setContext
2025-03-11 18:17:27.698 [trace] [Extension Host] ExtHostCommands#executeCommand setContext
2025-03-11 18:17:27.698 [trace] [Extension Host] ExtHostCommands#executeCommand _setContext
2025-03-11 18:17:27.707 [trace] [Extension Host] [ms-python.python] INVOKE provider 'n=>n.provideInlayHints(E.revive(r),i,s)'
2025-03-11 18:17:27.709 [trace] [Window] CommandService#executeCommand _setContext
2025-03-11 18:17:27.709 [trace] [Window] CommandService#executeCommand _setContext
2025-03-11 18:17:27.709 [trace] [Extension Host] [InlayHints] NO inlay hints from 'ms-python.python' for range {"startLineNumber":1,"startColumn":1,"endLineNumber":1,"endColumn":1}
2025-03-11 18:17:27.709 [trace] [Extension Host] [ms-python.python] provider DONE after 2ms
2025-03-11 18:17:27.710 [trace] [Window] CommandService#executeCommand _setContext
2025-03-11 18:17:27.712 [trace] [Window] CommandService#executeCommand _setContext
2025-03-11 18:17:27.712 [trace] [Window] CommandService#executeCommand _setContext
2025-03-11 18:17:27.718 [trace] [Window] [DEBOUNCE: InlayHint] for file:///home/user/dev/testing/test.py is 25ms
2025-03-11 18:17:27.732 [trace] [Window] ChatListItemRenderer#doNextProgressiveRender: START progressive render, index=1, renderData={"lastRenderTime":1741717047692,"renderedWordCount":216,"renderedParts":[{"kind":"markdownContent","content":{"value":"### [test.py](file:///home/user/dev/testing/test.py)\n\nAdd functionality to compare throughput for different batch sizes and generate graphs.\n\n````python\n<vscode_codeblock_uri>file:///home/user/dev/testing/test.py</vscode_codeblock_uri>import time\nimport warnings\nimport os\nfrom typing import List\nimport matplotlib.pyplot as plt # Add matplotlib for graphing\n\nwarnings.filterwarnings(\"ignore\")\nos.environ['TF_CPP_MIN_LOG_LEVEL'] = '3'\n\nfrom outlines import models, generate, samplers\n\n# ...existing code...\n\ndef benchmark_batch_sizes(topic_pairs: List[tuple], llm_choice, batch_sizes: List[int]):\n \"\"\"Benchmark performance for different batch sizes.\"\"\"\n results = {}\n for batch_size in batch_sizes:\n print(f\"\\nBenchmarking batch size: {batch_size}\")\n start_time = time.time()\n for i in range(0, len(topic_pairs), batch_size):\n batch = topic_pairs[i:i + batch_size]\n batch_score_topics(batch, llm_choice)\n total_time = time.time() - start_time\n throughput = len(topic_pairs) / total_time\n results[batch_size] = throughput\n print(f\"Batch size: {batch_size}, Throughput: {throughput:.2f} pairs/second\")\n return results\n\ndef plot_results(results):\n \"\"\"Plot the benchmarking results.\"\"\"\n batch_sizes = list(results.keys())\n throughputs = list(results.values())\n plt.figure(figsize=(10, 6))\n plt.plot(batch_sizes, throughputs, marker='o')\n plt.xlabel('Batch Size')\n plt.ylabel('Throughput (pairs/second)')\n plt.title('Throughput vs Batch Size')\n plt.grid(True)\n plt.show()\n\ndef main():\n # Load the model with generate.choice configured\n llm_choice = load_model()\n \n # List of topic pairs to score\n topic_pairs = [\n ('artificial intelligence', 'machine learning'),\n ('quantum physics', 'astronomy'),\n ('poetry', 'literature'),\n ('basketball', 'soccer'),\n ('biology', 'chemistry'),\n ('music', 'painting'),\n ('cooking', 'baking'),\n ('psychology', 'neuroscience')\n ]\n \n # Benchmark performance for different batch sizes\n batch_sizes = [1, 2, 4, 8] # Define different batch sizes to test\n results = benchmark_batch_sizes(topic_pairs, llm_choice, batch_sizes)\n \n # Plot the results\n plot_results(results)\n\nif __name__ == \"__main__\":\n main()\n````","supportThemeIcons":false,"supportHtml":false}},{"kind":"textEditGroup","uri":{"$mid":1,"fsPath":"/home/user/dev/testing/test.py","external":"file:///home/user/dev/testing/test.py","path":"/home/user/dev/testing/test.py","scheme":"file"},"edits":[[],[{"text":"","range":{"startLineNumber":1,"startColumn":1,"endLineNumber":73,"endColumn":1}}],[]],"done":true}]}
2025-03-11 18:17:27.732 [trace] [Window] ChatListItemRenderer#getNextProgressiveRenderContent: Want to render 219 at 80 words/s, counting...
2025-03-11 18:17:27.732 [trace] [Window] ChatListItemRenderer#getNextProgressiveRenderContent: Chunk 0: Want to render 219 words and found 216 words. Total words in chunk: 216
2025-03-11 18:17:27.733 [trace] [Window] ChatListItemRenderer#getNextProgressiveRenderContent: Want to render 219 words. Rendering 216 words. Buffer: 0 words
2025-03-11 18:17:27.733 [trace] [Window] ChatListItemRenderer#doNextProgressiveRender: END progressive render, index=1 and clearing renderData, response is complete
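The "END progressive render ... clearing renderData" entry marks the loop's exit condition: every available word is on screen (216 of 216, buffer 0) and the response is complete. An illustrative predicate, not the real renderer:

````python
def render_finished(rendered_words: int, total_words: int,
                    response_complete: bool) -> bool:
    """True once the full response has streamed and been rendered."""
    return response_complete and rendered_words >= total_words
````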
2025-03-11 18:17:27.765 [trace] [Extension Host] [ms-python.python] INVOKE provider 'n=>n.provideFoldingRanges(E.revive(r),i,s)'
2025-03-11 18:17:27.768 [trace] [Extension Host] [ms-python.python] provider DONE after 3ms
2025-03-11 18:17:27.769 [trace] [Window] [DEBOUNCE: Folding] for file:///home/user/dev/testing/test.py is 200ms
2025-03-11 18:17:27.815 [trace] [Extension Host] [vscode.merge-conflict] INVOKE provider 's=>s.provideCodeLenses(E.revive(r),i)'
2025-03-11 18:17:27.817 [trace] [Extension Host] ExtHostCommands#executeCommand setContext
2025-03-11 18:17:27.817 [trace] [Extension Host] ExtHostCommands#executeCommand _setContext
2025-03-11 18:17:27.817 [trace] [Extension Host] [vscode.merge-conflict] provider DONE after 2ms
2025-03-11 18:17:27.818 [trace] [Window] CommandService#executeCommand _setContext
2025-03-11 18:17:27.818 [trace] [Window] [DEBOUNCE: CodeLensProvide] for file:///home/user/dev/testing/test.py is 250ms
2025-03-11 18:17:27.829 [trace] [Extension Host] [DiagnosticCollection] change many (extension, owner, uris) ms-python.python python [[{"$mid":1,"external":"file:///home/user/dev/testing/test.py","path":"/home/user/dev/testing/test.py","scheme":"file"},[]]]
2025-03-11 18:17:27.867 [trace] [Extension Host] [ms-python.python] INVOKE provider 'n=>n.provideDocumentSemanticTokens(E.revive(r),i,s)'
2025-03-11 18:17:27.879 [trace] [Extension Host] ExtHostCommands#executeCommand setContext
2025-03-11 18:17:27.880 [trace] [Extension Host] ExtHostCommands#executeCommand _setContext
2025-03-11 18:17:27.881 [trace] [Window] CommandService#executeCommand _setContext
2025-03-11 18:17:27.912 [trace] [Extension Host] [ms-python.python] INVOKE provider 's=>s.provideDocumentSymbols(E.revive(r),i)'
2025-03-11 18:17:27.915 [trace] [Extension Host] [ms-python.python] INVOKE provider 's=>s.provideDocumentSymbols(E.revive(r),i)'
2025-03-11 18:17:28.298 [trace] [Extension Host] [DiagnosticCollection] change many (extension, owner, uris) ms-python.python python [[{"$mid":1,"external":"vscode-chat-code-block://817a69f6-73b9-417e-b618-7dfcda8614eb/response_11ddebd7-7ea8-4882-9d2d-09a47469e5c3/0#%7B%22references%22%3A%5B%5D%7D","path":"/response_11ddebd7-7ea8-4882-9d2d-09a47469e5c3/0","scheme":"vscode-chat-code-block","authority":"817a69f6-73b9-417e-b618-7dfcda8614eb","fragment":"{\"references\":[]}"},[{"startLineNumber":22,"startColumn":13,"endLineNumber":22,"endColumn":31,"message":"\"batch_score_topics\" is not defined","source":"Pylance","code":{"value":"reportUndefinedVariable","target":{"$mid":1,"path":"/microsoft/pyright/blob/main/docs/configuration.md","scheme":"https","authority":"github.com","fragment":"reportUndefinedVariable"}},"severity":4,"modelVersionId":3},{"startLineNumber":43,"startColumn":18,"endLineNumber":43,"endColumn":28,"message":"\"load_model\" is not defined","source":"Pylance","code":{"value":"reportUndefinedVariable","target":{"$mid":1,"path":"/microsoft/pyright/blob/main/docs/configuration.md","scheme":"https","authority":"github.com","fragment":"reportUndefinedVariable"}},"severity":4,"modelVersionId":3},{"startLineNumber":10,"startColumn":22,"endLineNumber":10,"endColumn":28,"message":"\"models\" is not accessed","source":"Pylance","severity":1,"tags":[1],"modelVersionId":3},{"startLineNumber":10,"startColumn":30,"endLineNumber":10,"endColumn":38,"message":"\"generate\" is not accessed","source":"Pylance","severity":1,"tags":[1],"modelVersionId":3},{"startLineNumber":10,"startColumn":40,"endLineNumber":10,"endColumn":48,"message":"\"samplers\" is not accessed","source":"Pylance","severity":1,"tags":[1],"modelVersionId":3}]]]
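Pylance flags "batch_score_topics" and "load_model" as undefined (and models/generate/samplers as unused) because the chat snippet replaces their definitions and call sites with the "# ...existing code..." placeholder. Hypothetical stubs with plausible shapes that would satisfy the checker; the real definitions live in the elided part of test.py and may differ:

````python
from typing import List

def load_model():
    """Stub: load the LLM and return a configured choice generator."""
    raise NotImplementedError("definition elided in the chat response")

def batch_score_topics(batch: List[tuple], llm_choice) -> None:
    """Stub: score one batch of topic pairs with the model."""
    raise NotImplementedError("definition elided in the chat response")
````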
2025-03-11 18:17:28.300 [trace] [Extension Host] [ms-python.python] provider DONE after 388ms
2025-03-11 18:17:28.300 [trace] [Extension Host] [ms-python.python] provider DONE after 385ms
2025-03-11 18:17:28.301 [trace] [Extension Host] [ms-python.python] INVOKE provider 'n=>n.provideFoldingRanges(E.revive(r),i,s)'
2025-03-11 18:17:28.301 [trace] [Window] [DEBOUNCE: DocumentSymbols] for file:///home/user/dev/testing/test.py is 350ms
2025-03-11 18:17:28.301 [trace] [Extension Host] [ms-python.python] provider DONE after 434ms
2025-03-11 18:17:28.302 [trace] [Extension Host] [ms-python.python] provider DONE after 1ms
2025-03-11 18:17:28.302 [trace] [Extension Host] [ms-python.python] INVOKE provider 'i=>i.releaseDocumentSemanticColoring(r)'
2025-03-11 18:17:28.302 [trace] [Window] [DEBOUNCE: DocumentSemanticTokens] for file:///home/user/dev/testing/test.py is 300ms
2025-03-11 18:17:28.302 [trace] [Extension Host] [ms-python.python] provider DONE after 0ms
2025-03-11 18:17:28.348 [trace] [Extension Host] ExtHostCommands#executeCommand setContext
2025-03-11 18:17:28.349 [trace] [Extension Host] ExtHostCommands#executeCommand _setContext
2025-03-11 18:17:28.350 [trace] [Window] CommandService#executeCommand _setContext
2025-03-11 18:17:28.366 [trace] [Extension Host] [ms-python.python] INVOKE provider 'n=>n.provideInlayHints(E.revive(r),i,s)'
2025-03-11 18:17:28.367 [trace] [Extension Host] [InlayHints] NO inlay hints from 'ms-python.python' for range {"startLineNumber":1,"startColumn":1,"endLineNumber":1,"endColumn":1}
2025-03-11 18:17:28.367 [trace] [Extension Host] [ms-python.python] provider DONE after 1ms
2025-03-11 18:17:28.368 [trace] [Window] [DEBOUNCE: InlayHint] for file:///home/user/dev/testing/test.py is 25ms
2025-03-11 18:17:28.565 [trace] [Window] [DEBOUNCE: Document Colors] for file:///home/user/dev/testing/test.py is 1000ms
2025-03-11 18:17:28.565 [trace] [Extension Host] [vscode.git] INVOKE provider 's=>s.provideLinks(E.revive(r),i)'
2025-03-11 18:17:28.565 [trace] [Extension Host] [vscode.git] provider DONE after 0ms
2025-03-11 18:17:28.566 [trace] [Window] [DEBOUNCE: Links] for file:///home/user/dev/testing/test.py is 1000ms
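The DEBOUNCE entries scattered through this trace (InlayHint 25ms, Folding 200ms, CodeLens 250ms, SemanticTokens 300ms, DocumentSymbols 350ms, Colors/Links 1000ms) show per-feature delays that track provider latency. One plausible scheme, a moving average clamped between a floor and a ceiling; the exact policy VS Code uses is not visible in the log, so this is an assumption:

````python
class AdaptiveDebounce:
    def __init__(self, default_ms: float, min_ms: float, max_ms: float):
        self.value = default_ms
        self.min_ms = min_ms
        self.max_ms = max_ms

    def record(self, provider_duration_ms: float) -> None:
        # Exponential moving average of how long the provider took.
        self.value = 0.7 * self.value + 0.3 * provider_duration_ms
        self.value = min(max(self.value, self.min_ms), self.max_ms)

    def delay_ms(self) -> float:
        return self.value
````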
2025-03-11 18:17:28.571 [trace] [Window] [backup tracker] creating backup file:///home/user/dev/testing/test.py
2025-03-11 18:17:28.571 [trace] [Window] [backup tracker] storing backup file:///home/user/dev/testing/test.py
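The backup-tracker entries sketch a simple lifecycle: a content change on a dirty model schedules a backup, a newer change clears the pending one (as seen at 18:17:27.570), and the backup is finally written about a second later. A simplified, timer-based illustration under those assumptions, not VS Code's implementation:

````python
import threading
from typing import Optional

class BackupTracker:
    def __init__(self, delay_seconds: float = 1.0):
        self.delay = delay_seconds
        self._pending: Optional[threading.Timer] = None

    def schedule_backup(self, path: str, content: str) -> None:
        if self._pending is not None:
            self._pending.cancel()      # "clearing pending backup creation"
        self._pending = threading.Timer(
            self.delay, self._store_backup, args=(path, content))
        self._pending.start()           # "scheduling backup"

    def _store_backup(self, path: str, content: str) -> None:
        # "creating backup" / "storing backup"
        with open(path + ".bak", "w") as f:
            f.write(content)
````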
2025-03-11 18:17:30.945 [trace] [Extension Host] [ms-python.python] INVOKE provider 'n=>n.provideInlayHints(E.revive(r),i,s)'
2025-03-11 18:17:30.947 [trace] [Extension Host] [InlayHints] NO inlay hints from 'ms-python.python' for range {"startLineNumber":1,"startColumn":1,"endLineNumber":1,"endColumn":1}
2025-03-11 18:17:30.947 [trace] [Extension Host] [ms-python.python] provider DONE after 2ms
2025-03-11 18:17:30.948 [trace] [Window] [DEBOUNCE: InlayHint] for file:///home/user/dev/testing/test.py is 25ms
2025-03-11 18:17:31.281 [trace] [Extension Host] [ms-python.python] INVOKE provider 'n=>n.provideInlayHints(E.revive(r),i,s)'
2025-03-11 18:17:31.282 [trace] [Extension Host] [InlayHints] NO inlay hints from 'ms-python.python' for range {"startLineNumber":1,"startColumn":1,"endLineNumber":1,"endColumn":1}
2025-03-11 18:17:31.282 [trace] [Extension Host] [ms-python.python] provider DONE after 1ms
2025-03-11 18:17:31.283 [trace] [Window] [DEBOUNCE: InlayHint] for file:///home/user/dev/testing/test.py is 25ms
2025-03-11 18:17:31.779 [trace] [Extension Host] [ms-python.python] INVOKE provider 'n=>n.provideInlayHints(E.revive(r),i,s)'
2025-03-11 18:17:31.781 [trace] [Extension Host] [InlayHints] NO inlay hints from 'ms-python.python' for range {"startLineNumber":1,"startColumn":1,"endLineNumber":1,"endColumn":1}
2025-03-11 18:17:31.781 [trace] [Extension Host] [ms-python.python] provider DONE after 2ms
2025-03-11 18:17:31.782 [trace] [Window] [DEBOUNCE: InlayHint] for file:///home/user/dev/testing/test.py is 25ms