@inspirit941
Created April 24, 2023 04:27
23.04.23 - aifactory 김태영 - LangChain 시작하기 (2).ipynb
{
"nbformat": 4,
"nbformat_minor": 0,
"metadata": {
"colab": {
"provenance": [],
"authorship_tag": "ABX9TyMPnqQ2Swn4ZPIQoTGc4lla",
"include_colab_link": true
},
"kernelspec": {
"name": "python3",
"display_name": "Python 3"
},
"language_info": {
"name": "python"
}
},
"cells": [
{
"cell_type": "markdown",
"metadata": {
"id": "view-in-github",
"colab_type": "text"
},
"source": [
"<a href=\"https://colab.research.google.com/gist/inspirit941/9a6bef9cc68a84cbc39fbd821cac9897/23-04-23-aifactory-langchain-2.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>"
]
},
{
"cell_type": "code",
"execution_count": 3,
"metadata": {
"id": "4oBVC4BgAJO5"
},
"outputs": [],
"source": [
"import os\n",
"\n",
"os.environ[\"OPENAI_API_KEY\"] = \"\"\n",
"os.environ[\"SERPAPI_API_KEY\"] = \"\""
]
},
{
"cell_type": "code",
"source": [
"!pip install langchain # 자연어 처리 및 기계 번역에 사용됩니다.\n",
"!pip install tiktoken # 텍스트 토큰화에 사용됩니다.\n",
"!pip install openai # OpenAI API와 상호작용하는 데 사용됩니다.\n",
"!pip install google-search-results # 구글 검색 결과를 가져오는 데 사용됩니다.\n",
"!pip install chromadb # 벡터 데이터베이스를 이용하는 데 사용됩니다.\n",
"!pip install faiss-cpu"
],
"metadata": {
"colab": {
"base_uri": "https://localhost:8080/"
},
"id": "Md-TGcE_Atd2",
"outputId": "6b8ab884-b9bb-4a10-841d-c7184937be22"
},
"execution_count": 2,
"outputs": [
{
"output_type": "stream",
"name": "stdout",
"text": [
"Looking in indexes: https://pypi.org/simple, https://us-python.pkg.dev/colab-wheels/public/simple/\n",
"Collecting langchain\n",
" Downloading langchain-0.0.147-py3-none-any.whl (626 kB)\n",
"\u001b[2K \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m626.5/626.5 kB\u001b[0m \u001b[31m10.3 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
"\u001b[?25hRequirement already satisfied: numpy<2,>=1 in /usr/local/lib/python3.9/dist-packages (from langchain) (1.22.4)\n",
"Requirement already satisfied: tqdm>=4.48.0 in /usr/local/lib/python3.9/dist-packages (from langchain) (4.65.0)\n",
"Collecting SQLAlchemy<2,>=1\n",
" Downloading SQLAlchemy-1.4.47-cp39-cp39-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_17_x86_64.manylinux2014_x86_64.whl (1.6 MB)\n",
"\u001b[2K \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m1.6/1.6 MB\u001b[0m \u001b[31m22.2 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
"\u001b[?25hCollecting openapi-schema-pydantic<2.0,>=1.2\n",
" Downloading openapi_schema_pydantic-1.2.4-py3-none-any.whl (90 kB)\n",
"\u001b[2K \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m90.0/90.0 kB\u001b[0m \u001b[31m5.3 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
"\u001b[?25hCollecting dataclasses-json<0.6.0,>=0.5.7\n",
" Downloading dataclasses_json-0.5.7-py3-none-any.whl (25 kB)\n",
"Requirement already satisfied: PyYAML>=5.4.1 in /usr/local/lib/python3.9/dist-packages (from langchain) (6.0)\n",
"Requirement already satisfied: tenacity<9.0.0,>=8.1.0 in /usr/local/lib/python3.9/dist-packages (from langchain) (8.2.2)\n",
"Requirement already satisfied: requests<3,>=2 in /usr/local/lib/python3.9/dist-packages (from langchain) (2.27.1)\n",
"Collecting async-timeout<5.0.0,>=4.0.0\n",
" Downloading async_timeout-4.0.2-py3-none-any.whl (5.8 kB)\n",
"Requirement already satisfied: numexpr<3.0.0,>=2.8.4 in /usr/local/lib/python3.9/dist-packages (from langchain) (2.8.4)\n",
"Collecting aiohttp<4.0.0,>=3.8.3\n",
" Downloading aiohttp-3.8.4-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (1.0 MB)\n",
"\u001b[2K \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m1.0/1.0 MB\u001b[0m \u001b[31m26.6 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
"\u001b[?25hRequirement already satisfied: pydantic<2,>=1 in /usr/local/lib/python3.9/dist-packages (from langchain) (1.10.7)\n",
"Requirement already satisfied: attrs>=17.3.0 in /usr/local/lib/python3.9/dist-packages (from aiohttp<4.0.0,>=3.8.3->langchain) (23.1.0)\n",
"Collecting multidict<7.0,>=4.5\n",
" Downloading multidict-6.0.4-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (114 kB)\n",
"\u001b[2K \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m114.2/114.2 kB\u001b[0m \u001b[31m11.7 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
"\u001b[?25hRequirement already satisfied: charset-normalizer<4.0,>=2.0 in /usr/local/lib/python3.9/dist-packages (from aiohttp<4.0.0,>=3.8.3->langchain) (2.0.12)\n",
"Collecting yarl<2.0,>=1.0\n",
" Downloading yarl-1.9.1-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (269 kB)\n",
"\u001b[2K \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m269.3/269.3 kB\u001b[0m \u001b[31m5.6 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
"\u001b[?25hCollecting aiosignal>=1.1.2\n",
" Downloading aiosignal-1.3.1-py3-none-any.whl (7.6 kB)\n",
"Collecting frozenlist>=1.1.1\n",
" Downloading frozenlist-1.3.3-cp39-cp39-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_17_x86_64.manylinux2014_x86_64.whl (158 kB)\n",
"\u001b[2K \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m158.8/158.8 kB\u001b[0m \u001b[31m13.1 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
"\u001b[?25hCollecting marshmallow<4.0.0,>=3.3.0\n",
" Downloading marshmallow-3.19.0-py3-none-any.whl (49 kB)\n",
"\u001b[2K \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m49.1/49.1 kB\u001b[0m \u001b[31m1.5 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
"\u001b[?25hCollecting typing-inspect>=0.4.0\n",
" Downloading typing_inspect-0.8.0-py3-none-any.whl (8.7 kB)\n",
"Collecting marshmallow-enum<2.0.0,>=1.5.1\n",
" Downloading marshmallow_enum-1.5.1-py2.py3-none-any.whl (4.2 kB)\n",
"Requirement already satisfied: typing-extensions>=4.2.0 in /usr/local/lib/python3.9/dist-packages (from pydantic<2,>=1->langchain) (4.5.0)\n",
"Requirement already satisfied: certifi>=2017.4.17 in /usr/local/lib/python3.9/dist-packages (from requests<3,>=2->langchain) (2022.12.7)\n",
"Requirement already satisfied: idna<4,>=2.5 in /usr/local/lib/python3.9/dist-packages (from requests<3,>=2->langchain) (3.4)\n",
"Requirement already satisfied: urllib3<1.27,>=1.21.1 in /usr/local/lib/python3.9/dist-packages (from requests<3,>=2->langchain) (1.26.15)\n",
"Requirement already satisfied: greenlet!=0.4.17 in /usr/local/lib/python3.9/dist-packages (from SQLAlchemy<2,>=1->langchain) (2.0.2)\n",
"Requirement already satisfied: packaging>=17.0 in /usr/local/lib/python3.9/dist-packages (from marshmallow<4.0.0,>=3.3.0->dataclasses-json<0.6.0,>=0.5.7->langchain) (23.1)\n",
"Collecting mypy-extensions>=0.3.0\n",
" Downloading mypy_extensions-1.0.0-py3-none-any.whl (4.7 kB)\n",
"Installing collected packages: SQLAlchemy, mypy-extensions, multidict, marshmallow, frozenlist, async-timeout, yarl, typing-inspect, openapi-schema-pydantic, marshmallow-enum, aiosignal, dataclasses-json, aiohttp, langchain\n",
" Attempting uninstall: SQLAlchemy\n",
" Found existing installation: SQLAlchemy 2.0.9\n",
" Uninstalling SQLAlchemy-2.0.9:\n",
" Successfully uninstalled SQLAlchemy-2.0.9\n",
"Successfully installed SQLAlchemy-1.4.47 aiohttp-3.8.4 aiosignal-1.3.1 async-timeout-4.0.2 dataclasses-json-0.5.7 frozenlist-1.3.3 langchain-0.0.147 marshmallow-3.19.0 marshmallow-enum-1.5.1 multidict-6.0.4 mypy-extensions-1.0.0 openapi-schema-pydantic-1.2.4 typing-inspect-0.8.0 yarl-1.9.1\n",
"Looking in indexes: https://pypi.org/simple, https://us-python.pkg.dev/colab-wheels/public/simple/\n",
"Collecting tiktoken\n",
" Downloading tiktoken-0.3.3-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (1.7 MB)\n",
"\u001b[2K \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m1.7/1.7 MB\u001b[0m \u001b[31m30.2 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
"\u001b[?25hRequirement already satisfied: regex>=2022.1.18 in /usr/local/lib/python3.9/dist-packages (from tiktoken) (2022.10.31)\n",
"Requirement already satisfied: requests>=2.26.0 in /usr/local/lib/python3.9/dist-packages (from tiktoken) (2.27.1)\n",
"Requirement already satisfied: idna<4,>=2.5 in /usr/local/lib/python3.9/dist-packages (from requests>=2.26.0->tiktoken) (3.4)\n",
"Requirement already satisfied: charset-normalizer~=2.0.0 in /usr/local/lib/python3.9/dist-packages (from requests>=2.26.0->tiktoken) (2.0.12)\n",
"Requirement already satisfied: urllib3<1.27,>=1.21.1 in /usr/local/lib/python3.9/dist-packages (from requests>=2.26.0->tiktoken) (1.26.15)\n",
"Requirement already satisfied: certifi>=2017.4.17 in /usr/local/lib/python3.9/dist-packages (from requests>=2.26.0->tiktoken) (2022.12.7)\n",
"Installing collected packages: tiktoken\n",
"Successfully installed tiktoken-0.3.3\n",
"Looking in indexes: https://pypi.org/simple, https://us-python.pkg.dev/colab-wheels/public/simple/\n",
"Collecting openai\n",
" Downloading openai-0.27.4-py3-none-any.whl (70 kB)\n",
"\u001b[2K \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m70.3/70.3 kB\u001b[0m \u001b[31m4.8 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
"\u001b[?25hRequirement already satisfied: requests>=2.20 in /usr/local/lib/python3.9/dist-packages (from openai) (2.27.1)\n",
"Requirement already satisfied: aiohttp in /usr/local/lib/python3.9/dist-packages (from openai) (3.8.4)\n",
"Requirement already satisfied: tqdm in /usr/local/lib/python3.9/dist-packages (from openai) (4.65.0)\n",
"Requirement already satisfied: certifi>=2017.4.17 in /usr/local/lib/python3.9/dist-packages (from requests>=2.20->openai) (2022.12.7)\n",
"Requirement already satisfied: idna<4,>=2.5 in /usr/local/lib/python3.9/dist-packages (from requests>=2.20->openai) (3.4)\n",
"Requirement already satisfied: charset-normalizer~=2.0.0 in /usr/local/lib/python3.9/dist-packages (from requests>=2.20->openai) (2.0.12)\n",
"Requirement already satisfied: urllib3<1.27,>=1.21.1 in /usr/local/lib/python3.9/dist-packages (from requests>=2.20->openai) (1.26.15)\n",
"Requirement already satisfied: yarl<2.0,>=1.0 in /usr/local/lib/python3.9/dist-packages (from aiohttp->openai) (1.9.1)\n",
"Requirement already satisfied: async-timeout<5.0,>=4.0.0a3 in /usr/local/lib/python3.9/dist-packages (from aiohttp->openai) (4.0.2)\n",
"Requirement already satisfied: aiosignal>=1.1.2 in /usr/local/lib/python3.9/dist-packages (from aiohttp->openai) (1.3.1)\n",
"Requirement already satisfied: multidict<7.0,>=4.5 in /usr/local/lib/python3.9/dist-packages (from aiohttp->openai) (6.0.4)\n",
"Requirement already satisfied: attrs>=17.3.0 in /usr/local/lib/python3.9/dist-packages (from aiohttp->openai) (23.1.0)\n",
"Requirement already satisfied: frozenlist>=1.1.1 in /usr/local/lib/python3.9/dist-packages (from aiohttp->openai) (1.3.3)\n",
"Installing collected packages: openai\n",
"Successfully installed openai-0.27.4\n",
"Looking in indexes: https://pypi.org/simple, https://us-python.pkg.dev/colab-wheels/public/simple/\n",
"Collecting google-search-results\n",
" Downloading google_search_results-2.4.2.tar.gz (18 kB)\n",
" Preparing metadata (setup.py) ... \u001b[?25l\u001b[?25hdone\n",
"Requirement already satisfied: requests in /usr/local/lib/python3.9/dist-packages (from google-search-results) (2.27.1)\n",
"Requirement already satisfied: urllib3<1.27,>=1.21.1 in /usr/local/lib/python3.9/dist-packages (from requests->google-search-results) (1.26.15)\n",
"Requirement already satisfied: certifi>=2017.4.17 in /usr/local/lib/python3.9/dist-packages (from requests->google-search-results) (2022.12.7)\n",
"Requirement already satisfied: idna<4,>=2.5 in /usr/local/lib/python3.9/dist-packages (from requests->google-search-results) (3.4)\n",
"Requirement already satisfied: charset-normalizer~=2.0.0 in /usr/local/lib/python3.9/dist-packages (from requests->google-search-results) (2.0.12)\n",
"Building wheels for collected packages: google-search-results\n",
" Building wheel for google-search-results (setup.py) ... \u001b[?25l\u001b[?25hdone\n",
" Created wheel for google-search-results: filename=google_search_results-2.4.2-py3-none-any.whl size=32019 sha256=9ed5896f1039313a8da3fa5c168dc0f94916910b5501800c70f471f2e272acf3\n",
" Stored in directory: /root/.cache/pip/wheels/68/8e/73/744b7d9d7ac618849d93081a20e1c0deccd2aef90901c9f5a9\n",
"Successfully built google-search-results\n",
"Installing collected packages: google-search-results\n",
"Successfully installed google-search-results-2.4.2\n",
"Looking in indexes: https://pypi.org/simple, https://us-python.pkg.dev/colab-wheels/public/simple/\n",
"Collecting chromadb\n",
" Downloading chromadb-0.3.21-py3-none-any.whl (46 kB)\n",
"\u001b[2K \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m46.4/46.4 kB\u001b[0m \u001b[31m3.9 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
"\u001b[?25hCollecting sentence-transformers>=2.2.2\n",
" Downloading sentence-transformers-2.2.2.tar.gz (85 kB)\n",
"\u001b[2K \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m86.0/86.0 kB\u001b[0m \u001b[31m6.8 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
"\u001b[?25h Preparing metadata (setup.py) ... \u001b[?25l\u001b[?25hdone\n",
"Collecting uvicorn[standard]>=0.18.3\n",
" Downloading uvicorn-0.21.1-py3-none-any.whl (57 kB)\n",
"\u001b[2K \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m57.8/57.8 kB\u001b[0m \u001b[31m6.4 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
"\u001b[?25hCollecting requests>=2.28\n",
" Downloading requests-2.28.2-py3-none-any.whl (62 kB)\n",
"\u001b[2K \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m62.8/62.8 kB\u001b[0m \u001b[31m6.7 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
"\u001b[?25hCollecting clickhouse-connect>=0.5.7\n",
" Downloading clickhouse_connect-0.5.20-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (927 kB)\n",
"\u001b[2K \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m927.8/927.8 kB\u001b[0m \u001b[31m29.5 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
"\u001b[?25hCollecting posthog>=2.4.0\n",
" Downloading posthog-3.0.1-py2.py3-none-any.whl (37 kB)\n",
"Collecting fastapi>=0.85.1\n",
" Downloading fastapi-0.95.1-py3-none-any.whl (56 kB)\n",
"\u001b[2K \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m57.0/57.0 kB\u001b[0m \u001b[31m6.1 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
"\u001b[?25hCollecting hnswlib>=0.7\n",
" Downloading hnswlib-0.7.0.tar.gz (33 kB)\n",
" Installing build dependencies ... \u001b[?25l\u001b[?25hdone\n",
" Getting requirements to build wheel ... \u001b[?25l\u001b[?25hdone\n",
" Preparing metadata (pyproject.toml) ... \u001b[?25l\u001b[?25hdone\n",
"Requirement already satisfied: pydantic>=1.9 in /usr/local/lib/python3.9/dist-packages (from chromadb) (1.10.7)\n",
"Requirement already satisfied: numpy>=1.21.6 in /usr/local/lib/python3.9/dist-packages (from chromadb) (1.22.4)\n",
"Requirement already satisfied: pandas>=1.3 in /usr/local/lib/python3.9/dist-packages (from chromadb) (1.5.3)\n",
"Requirement already satisfied: duckdb>=0.7.1 in /usr/local/lib/python3.9/dist-packages (from chromadb) (0.7.1)\n",
"Collecting lz4\n",
" Downloading lz4-4.3.2-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (1.3 MB)\n",
"\u001b[2K \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m1.3/1.3 MB\u001b[0m \u001b[31m27.0 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
"\u001b[?25hRequirement already satisfied: certifi in /usr/local/lib/python3.9/dist-packages (from clickhouse-connect>=0.5.7->chromadb) (2022.12.7)\n",
"Requirement already satisfied: urllib3>=1.26 in /usr/local/lib/python3.9/dist-packages (from clickhouse-connect>=0.5.7->chromadb) (1.26.15)\n",
"Collecting zstandard\n",
" Downloading zstandard-0.21.0-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (2.7 MB)\n",
"\u001b[2K \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m2.7/2.7 MB\u001b[0m \u001b[31m72.3 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
"\u001b[?25hRequirement already satisfied: pytz in /usr/local/lib/python3.9/dist-packages (from clickhouse-connect>=0.5.7->chromadb) (2022.7.1)\n",
"Collecting starlette<0.27.0,>=0.26.1\n",
" Downloading starlette-0.26.1-py3-none-any.whl (66 kB)\n",
"\u001b[2K \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m66.9/66.9 kB\u001b[0m \u001b[31m8.1 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
"\u001b[?25hRequirement already satisfied: python-dateutil>=2.8.1 in /usr/local/lib/python3.9/dist-packages (from pandas>=1.3->chromadb) (2.8.2)\n",
"Collecting monotonic>=1.5\n",
" Downloading monotonic-1.6-py2.py3-none-any.whl (8.2 kB)\n",
"Requirement already satisfied: six>=1.5 in /usr/local/lib/python3.9/dist-packages (from posthog>=2.4.0->chromadb) (1.16.0)\n",
"Collecting backoff>=1.10.0\n",
" Downloading backoff-2.2.1-py3-none-any.whl (15 kB)\n",
"Requirement already satisfied: typing-extensions>=4.2.0 in /usr/local/lib/python3.9/dist-packages (from pydantic>=1.9->chromadb) (4.5.0)\n",
"Requirement already satisfied: charset-normalizer<4,>=2 in /usr/local/lib/python3.9/dist-packages (from requests>=2.28->chromadb) (2.0.12)\n",
"Requirement already satisfied: idna<4,>=2.5 in /usr/local/lib/python3.9/dist-packages (from requests>=2.28->chromadb) (3.4)\n",
"Collecting transformers<5.0.0,>=4.6.0\n",
" Downloading transformers-4.28.1-py3-none-any.whl (7.0 MB)\n",
"\u001b[2K \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m7.0/7.0 MB\u001b[0m \u001b[31m87.4 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
"\u001b[?25hRequirement already satisfied: tqdm in /usr/local/lib/python3.9/dist-packages (from sentence-transformers>=2.2.2->chromadb) (4.65.0)\n",
"Requirement already satisfied: torch>=1.6.0 in /usr/local/lib/python3.9/dist-packages (from sentence-transformers>=2.2.2->chromadb) (2.0.0+cu118)\n",
"Requirement already satisfied: torchvision in /usr/local/lib/python3.9/dist-packages (from sentence-transformers>=2.2.2->chromadb) (0.15.1+cu118)\n",
"Requirement already satisfied: scikit-learn in /usr/local/lib/python3.9/dist-packages (from sentence-transformers>=2.2.2->chromadb) (1.2.2)\n",
"Requirement already satisfied: scipy in /usr/local/lib/python3.9/dist-packages (from sentence-transformers>=2.2.2->chromadb) (1.10.1)\n",
"Requirement already satisfied: nltk in /usr/local/lib/python3.9/dist-packages (from sentence-transformers>=2.2.2->chromadb) (3.8.1)\n",
"Collecting sentencepiece\n",
" Downloading sentencepiece-0.1.98-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (1.3 MB)\n",
"\u001b[2K \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m1.3/1.3 MB\u001b[0m \u001b[31m69.5 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
"\u001b[?25hCollecting huggingface-hub>=0.4.0\n",
" Downloading huggingface_hub-0.13.4-py3-none-any.whl (200 kB)\n",
"\u001b[2K \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m200.1/200.1 kB\u001b[0m \u001b[31m23.8 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
"\u001b[?25hCollecting h11>=0.8\n",
" Downloading h11-0.14.0-py3-none-any.whl (58 kB)\n",
"\u001b[2K \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m58.3/58.3 kB\u001b[0m \u001b[31m6.3 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
"\u001b[?25hRequirement already satisfied: click>=7.0 in /usr/local/lib/python3.9/dist-packages (from uvicorn[standard]>=0.18.3->chromadb) (8.1.3)\n",
"Collecting watchfiles>=0.13\n",
" Downloading watchfiles-0.19.0-cp37-abi3-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (1.3 MB)\n",
"\u001b[2K \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m1.3/1.3 MB\u001b[0m \u001b[31m51.0 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
"\u001b[?25hCollecting httptools>=0.5.0\n",
" Downloading httptools-0.5.0-cp39-cp39-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_17_x86_64.manylinux2014_x86_64.whl (417 kB)\n",
"\u001b[2K \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m417.9/417.9 kB\u001b[0m \u001b[31m39.5 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
"\u001b[?25hCollecting uvloop!=0.15.0,!=0.15.1,>=0.14.0\n",
" Downloading uvloop-0.17.0-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (4.2 MB)\n",
"\u001b[2K \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m4.2/4.2 MB\u001b[0m \u001b[31m57.0 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
"\u001b[?25hCollecting python-dotenv>=0.13\n",
" Downloading python_dotenv-1.0.0-py3-none-any.whl (19 kB)\n",
"Requirement already satisfied: pyyaml>=5.1 in /usr/local/lib/python3.9/dist-packages (from uvicorn[standard]>=0.18.3->chromadb) (6.0)\n",
"Collecting websockets>=10.4\n",
" Downloading websockets-11.0.2-cp39-cp39-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_17_x86_64.manylinux2014_x86_64.whl (129 kB)\n",
"\u001b[2K \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m129.7/129.7 kB\u001b[0m \u001b[31m16.5 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
"\u001b[?25hRequirement already satisfied: filelock in /usr/local/lib/python3.9/dist-packages (from huggingface-hub>=0.4.0->sentence-transformers>=2.2.2->chromadb) (3.11.0)\n",
"Requirement already satisfied: packaging>=20.9 in /usr/local/lib/python3.9/dist-packages (from huggingface-hub>=0.4.0->sentence-transformers>=2.2.2->chromadb) (23.1)\n",
"Requirement already satisfied: anyio<5,>=3.4.0 in /usr/local/lib/python3.9/dist-packages (from starlette<0.27.0,>=0.26.1->fastapi>=0.85.1->chromadb) (3.6.2)\n",
"Requirement already satisfied: triton==2.0.0 in /usr/local/lib/python3.9/dist-packages (from torch>=1.6.0->sentence-transformers>=2.2.2->chromadb) (2.0.0)\n",
"Requirement already satisfied: sympy in /usr/local/lib/python3.9/dist-packages (from torch>=1.6.0->sentence-transformers>=2.2.2->chromadb) (1.11.1)\n",
"Requirement already satisfied: networkx in /usr/local/lib/python3.9/dist-packages (from torch>=1.6.0->sentence-transformers>=2.2.2->chromadb) (3.1)\n",
"Requirement already satisfied: jinja2 in /usr/local/lib/python3.9/dist-packages (from torch>=1.6.0->sentence-transformers>=2.2.2->chromadb) (3.1.2)\n",
"Requirement already satisfied: cmake in /usr/local/lib/python3.9/dist-packages (from triton==2.0.0->torch>=1.6.0->sentence-transformers>=2.2.2->chromadb) (3.25.2)\n",
"Requirement already satisfied: lit in /usr/local/lib/python3.9/dist-packages (from triton==2.0.0->torch>=1.6.0->sentence-transformers>=2.2.2->chromadb) (16.0.1)\n",
"Requirement already satisfied: regex!=2019.12.17 in /usr/local/lib/python3.9/dist-packages (from transformers<5.0.0,>=4.6.0->sentence-transformers>=2.2.2->chromadb) (2022.10.31)\n",
"Collecting tokenizers!=0.11.3,<0.14,>=0.11.1\n",
" Downloading tokenizers-0.13.3-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (7.8 MB)\n",
"\u001b[2K \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m7.8/7.8 MB\u001b[0m \u001b[31m68.8 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
"\u001b[?25hRequirement already satisfied: joblib in /usr/local/lib/python3.9/dist-packages (from nltk->sentence-transformers>=2.2.2->chromadb) (1.2.0)\n",
"Requirement already satisfied: threadpoolctl>=2.0.0 in /usr/local/lib/python3.9/dist-packages (from scikit-learn->sentence-transformers>=2.2.2->chromadb) (3.1.0)\n",
"Requirement already satisfied: pillow!=8.3.*,>=5.3.0 in /usr/local/lib/python3.9/dist-packages (from torchvision->sentence-transformers>=2.2.2->chromadb) (8.4.0)\n",
"Requirement already satisfied: sniffio>=1.1 in /usr/local/lib/python3.9/dist-packages (from anyio<5,>=3.4.0->starlette<0.27.0,>=0.26.1->fastapi>=0.85.1->chromadb) (1.3.0)\n",
"Requirement already satisfied: MarkupSafe>=2.0 in /usr/local/lib/python3.9/dist-packages (from jinja2->torch>=1.6.0->sentence-transformers>=2.2.2->chromadb) (2.1.2)\n",
"Requirement already satisfied: mpmath>=0.19 in /usr/local/lib/python3.9/dist-packages (from sympy->torch>=1.6.0->sentence-transformers>=2.2.2->chromadb) (1.3.0)\n",
"Building wheels for collected packages: hnswlib, sentence-transformers\n",
" Building wheel for hnswlib (pyproject.toml) ... \u001b[?25l\u001b[?25hdone\n",
" Created wheel for hnswlib: filename=hnswlib-0.7.0-cp39-cp39-linux_x86_64.whl size=2118374 sha256=6a2121822a48ba8700ddd243d7306833d4f01c5afd205a64c89726ae2a599eee\n",
" Stored in directory: /root/.cache/pip/wheels/ba/26/61/fface6c407f56418b3140cd7645917f20ba6b27d4e32b2bd20\n",
" Building wheel for sentence-transformers (setup.py) ... \u001b[?25l\u001b[?25hdone\n",
" Created wheel for sentence-transformers: filename=sentence_transformers-2.2.2-py3-none-any.whl size=125942 sha256=7af7c77db43926884de86451c8298cb7dcb8ec1181188b3138cadbe0c22ab880\n",
" Stored in directory: /root/.cache/pip/wheels/71/67/06/162a3760c40d74dd40bc855d527008d26341c2b0ecf3e8e11f\n",
"Successfully built hnswlib sentence-transformers\n",
"Installing collected packages: tokenizers, sentencepiece, monotonic, zstandard, websockets, uvloop, requests, python-dotenv, lz4, httptools, hnswlib, h11, backoff, watchfiles, uvicorn, starlette, posthog, huggingface-hub, clickhouse-connect, transformers, fastapi, sentence-transformers, chromadb\n",
" Attempting uninstall: requests\n",
" Found existing installation: requests 2.27.1\n",
" Uninstalling requests-2.27.1:\n",
" Successfully uninstalled requests-2.27.1\n",
"Successfully installed backoff-2.2.1 chromadb-0.3.21 clickhouse-connect-0.5.20 fastapi-0.95.1 h11-0.14.0 hnswlib-0.7.0 httptools-0.5.0 huggingface-hub-0.13.4 lz4-4.3.2 monotonic-1.6 posthog-3.0.1 python-dotenv-1.0.0 requests-2.28.2 sentence-transformers-2.2.2 sentencepiece-0.1.98 starlette-0.26.1 tokenizers-0.13.3 transformers-4.28.1 uvicorn-0.21.1 uvloop-0.17.0 watchfiles-0.19.0 websockets-11.0.2 zstandard-0.21.0\n",
"Looking in indexes: https://pypi.org/simple, https://us-python.pkg.dev/colab-wheels/public/simple/\n",
"Collecting faiss-cpu\n",
" Downloading faiss_cpu-1.7.3-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (17.0 MB)\n",
"\u001b[2K \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m17.0/17.0 MB\u001b[0m \u001b[31m23.4 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
"\u001b[?25hInstalling collected packages: faiss-cpu\n",
"Successfully installed faiss-cpu-1.7.3\n"
]
}
]
},
{
"cell_type": "markdown",
"source": [
"## Agent 시작하기\n",
"\n",
"agent 역할: 주어진 tool에서 특정 질문이나 명령이 왔을 때, 어떤 tool이 명령 수행애 적합한지 파악하고 실행하는 것.\n",
"\n",
"- 따라서 tool부터 정의하고, tool 중 llm 모델이 필요한 경우 llm을 파라미터로 넣는다."
],
"metadata": {
"id": "UtMqqEV2A1qT"
}
},
{
"cell_type": "code",
"source": [
"from langchain.agents import load_tools\n",
"from langchain.agents import initialize_agent\n",
"from langchain.agents import AgentType\n",
"from langchain.llms import OpenAI\n",
"\n",
"llm = OpenAI(temperature=0)\n",
"tools = load_tools([\"serpapi\",\"llm-math\"], llm=llm) ## 구글 search api, 수학을 풀 수 있는 llm 모델\n",
"agent = initialize_agent(tools, llm, agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION, verbose = True)\n",
"agent.run(\"Who is Leo Dicaprio's girlfriend? What is her current age raised to the 0.43 power?\")\n",
"\n",
"## tools에 정의한 llm 파라미터와 agent에서 정의한 llm 파라미터는 같은 객체를 참조하지만 용도는 다르다.\n",
"## 각각 tool 돌릴 때 사용할 llm과 agent 돌릴 때 사용할 llm.\n",
"## zero shot react description: 주어진 tool 중 agent에 적합한 게 뭔지 찾아서 리턴해주는 역할"
],
"metadata": {
"colab": {
"base_uri": "https://localhost:8080/",
"height": 324
},
"id": "qEcaexT7Aukr",
"outputId": "f7db1b35-4d47-4a51-92fc-3749f76e1c75"
},
"execution_count": 3,
"outputs": [
{
"output_type": "stream",
"name": "stdout",
"text": [
"\n",
"\n",
"\u001b[1m> Entering new AgentExecutor chain...\u001b[0m\n",
"\u001b[32;1m\u001b[1;3m I need to find out who Leo Dicaprio's girlfriend is and then calculate her age raised to the 0.43 power.\n",
"Action: Search\n",
"Action Input: \"Leo Dicaprio's girlfriend\"\u001b[0m\n",
"Observation: \u001b[36;1m\u001b[1;3mThey went on multiple vacations together, including venturing to Montauk and a private island in the Bahamas. Leonardo DiCaprio started dating 20-year-old model Camila Morrone in late 2017. They're still together today - and she's 22 now.\u001b[0m\n",
"Thought:\u001b[32;1m\u001b[1;3m I need to calculate her age raised to the 0.43 power.\n",
"Action: Calculator\n",
"Action Input: 22^0.43\u001b[0m\n",
"Observation: \u001b[33;1m\u001b[1;3mAnswer: 3.777824273683966\u001b[0m\n",
"Thought:\u001b[32;1m\u001b[1;3m I now know the final answer.\n",
"Final Answer: Camila Morrone is Leo Dicaprio's girlfriend and her current age raised to the 0.43 power is 3.777824273683966.\u001b[0m\n",
"\n",
"\u001b[1m> Finished chain.\u001b[0m\n"
]
},
{
"output_type": "execute_result",
"data": {
"text/plain": [
"\"Camila Morrone is Leo Dicaprio's girlfriend and her current age raised to the 0.43 power is 3.777824273683966.\""
],
"application/vnd.google.colaboratory.intrinsic+json": {
"type": "string"
}
},
"metadata": {},
"execution_count": 3
}
]
},
{
"cell_type": "markdown",
"source": [
"질문 인식 후, 필요한 작업 수행\n",
"- Actions: search -> agent가 파악한 작업 해결을 위한 action.\n",
" - observation: 레오나르도 디카프리오 여자친구의 나이를 검색한 결과\n",
" - Thought: 다음에 해야 할 작업이 무엇인지 ai가 내린 결정.\n",
" - \bhttps://www.youtube.com/watch?v=EekMq9cf1PQ 의 task 수행은 검색을 두 번 수행함. 이름 1번, 나이 1번. 그런데 내 코드에서는 검색을 한 번만 수행해서 이름과 나이를 찾음.\n",
" - 문제는, 강연예시와 내 코드의 응답값이 다름. 강연에서는 27로 조회했고, 내 코드는 22로 조회했다.\n",
"- Actions: calculator 실행."
],
"metadata": {
"id": "yoradpxHEjHD"
}
},
{
"cell_type": "markdown",
"source": [
"## 도구\n",
"도구는 일반적인 유틸리티(예: 검색), 다른 체인, 또는 다른 에이전트일 수 있습니다.\n",
"\n",
"```py\n",
"from langchain.agents import load_tools \n",
"tool_names = [...] \n",
"tools = load_tools(tool_names)\n",
"```\n",
"\n",
"일부 도구(예: 체인, 에이전트)는 초기화하기 위해 기본 LLM(Language Learning Model)이 필요할 수 있습니다. 이 경우, 다음과 같이 LLM을 전달할 수 있습니다.\n",
"\n",
"```py\n",
"from langchain.agents import load_tools \n",
"tool_names = [...] \n",
"llm = ... \n",
"tools = load_tools(tool_names, llm=llm)\n",
"```\n",
"\n",
"아래에 지원되는 모든 도구와 관련 정보의 목록이 있습니다:\n",
"\n",
"- 도구 이름(Tool Name): LLM이 도구를 참조하는 이름입니다.\n",
"- 도구 설명(Tool Description): LLM에 전달되는 도구 설명입니다.\n",
"- 참고 사항(Notes): LLM에 전달되지 않는 도구에 대한 참고 사항입니다.\n",
"- LLM 필요(Requires LLM): 이 도구가 초기화될 때 LLM이 필요한지 여부입니다. \n",
"\n",
"(선택 사항) \n",
"- 추가 매개변수(Optional Extra Parameters): 이 도구를 초기화하는 데 필요한 추가 매개변수입니다.\n",
"\n",
"이러한 도구를 사용하면 에이전트가 여러 작업을 수행하고, 다양한 환경에서 문제를 해결할 수 있는 능력이 향상됩니다. 도구를 사용하여 에이전트의 기능을 확장하고, 다양한 요구 사항에 대응할 수 있도록 합니다.\n",
"\n",
"> ## python_repl\n",
"> - Tool Name: Python REPL\n",
"> - Tool Description: A Python shell. Use this to execute python commands. Input should be a valid python command. If you expect output it should be printed out.\n",
"> - Notes: Maintains state.\n",
"> - Requires LLM: No\n",
"\n",
"> ## serpapi\n",
"> - Tool Name: Search\n",
"> - Tool Description: A search engine. Useful for when you need to answer questions about current events. Input should be a search query.\n",
"> - Notes: Calls the Serp API and then parses results.\n",
"> - Requires LLM: No\n",
"\n",
"\n",
"> ## llm-math\n",
"> - Tool Name: Calculator\n",
"> - Tool Description: Useful for when you need to answer questions about math.\n",
"> - Notes: An instance of the LLMMath chain.\n",
"> - Requires LLM: Yes"
],
"metadata": {
"id": "V_UZLHJrGreV"
}
},
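{
"cell_type": "markdown",
"source": [
"As a concrete sketch of the list above (an illustration, not lecture code; it assumes the SerpAPI key from the first cell is set): `python_repl` and `serpapi` can be loaded on their own, while `llm-math` wraps an LLMMathChain and therefore needs an `llm` when it is included.\n",
"\n",
"```py\n",
"from langchain.agents import load_tools\n",
"from langchain.llms import OpenAI\n",
"\n",
"# tools that do not require an LLM can be loaded without one\n",
"basic_tools = load_tools([\"python_repl\", \"serpapi\"])\n",
"\n",
"# llm-math requires an LLM to be initialized\n",
"llm = OpenAI(temperature=0)\n",
"all_tools = load_tools([\"python_repl\", \"serpapi\", \"llm-math\"], llm=llm)\n",
"\n",
"# the name/description pairs below are exactly what the agent reasons over\n",
"for t in all_tools:\n",
"    print(t.name, \"-\", t.description)\n",
"```"
],
"metadata": {}
},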
{
"cell_type": "markdown",
"source": [
"### 도구 생성하기\n",
"\n",
"#### 도구 데이터클래스 이용방법"
],
"metadata": {
"id": "vt__MVDjHc-V"
}
},
{
"cell_type": "code",
"source": [
"# Import things that are needed generically\n",
"from langchain.agents import initialize_agent, Tool\n",
"from langchain.agents import AgentType\n",
"from langchain.tools import BaseTool\n",
"from langchain.llms import OpenAI\n",
"from langchain import LLMMathChain, SerpAPIWrapper\n",
"\n",
"llm = OpenAI(temperature=0)\n",
"\n",
"# Load the tool configs that are needed.\n",
"search = SerpAPIWrapper()\n",
"llm_math_chain = LLMMathChain(llm=llm, verbose=True)\n",
"\n",
"## llm Model에서는 description을 보고\u001d 적절한 tool을 선택하는 거라고 함\n",
"tools = [\n",
" Tool(\n",
" name = \"Search\",\n",
" func=search.run,\n",
" description=\"useful for when you need to answer questions about current events\"\n",
" ),\n",
" Tool(\n",
" name=\"Calculator\",\n",
" func=llm_math_chain.run,\n",
" description=\"useful for when you need to answer questions about math\"\n",
" )\n",
"]\n",
"\n",
"# Construct the agent. We will use the default agent type here.\n",
"# See documentation for a full list of options.\n",
"agent = initialize_agent(tools, llm, agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION, verbose=True)\n",
"\n",
"agent.run(\"Who is Leo DiCaprio's girlfriend? What is her current age raised to the 0.43 power?\")"
],
"metadata": {
"colab": {
"base_uri": "https://localhost:8080/",
"height": 592
},
"id": "dVGqvVGlBiLj",
"outputId": "3a9b2047-3701-44a7-ce4f-686a20072a99"
},
"execution_count": 4,
"outputs": [
{
"output_type": "stream",
"name": "stdout",
"text": [
"\n",
"\n",
"\u001b[1m> Entering new AgentExecutor chain...\u001b[0m\n",
"\u001b[32;1m\u001b[1;3m I need to find out who Leo DiCaprio's girlfriend is and then calculate her age raised to the 0.43 power.\n",
"Action: Search\n",
"Action Input: \"Leo DiCaprio girlfriend\"\u001b[0m\n",
"Observation: \u001b[36;1m\u001b[1;3mAfter rumours of a romance with Gigi Hadid, the Oscar winner has seemingly moved on. First being linked to the television personality in September 2022, it appears as if his \"age bracket\" has moved up. This follows his rumoured relationship with mere 19-year-old Eden Polani.\u001b[0m\n",
"Thought:\u001b[32;1m\u001b[1;3m I need to find out Eden Polani's age\n",
"Action: Search\n",
"Action Input: \"Eden Polani age\"\u001b[0m\n",
"Observation: \u001b[36;1m\u001b[1;3m19 years old\u001b[0m\n",
"Thought:\u001b[32;1m\u001b[1;3m I now need to calculate her age raised to the 0.43 power\n",
"Action: Calculator\n",
"Action Input: 19^0.43\u001b[0m\n",
"\n",
"\u001b[1m> Entering new LLMMathChain chain...\u001b[0m\n",
"19^0.43\u001b[32;1m\u001b[1;3m\n",
"```text\n",
"19**0.43\n",
"```\n",
"...numexpr.evaluate(\"19**0.43\")...\n",
"\u001b[0m\n",
"Answer: \u001b[33;1m\u001b[1;3m3.547023357958959\u001b[0m\n",
"\u001b[1m> Finished chain.\u001b[0m\n",
"\n",
"Observation: \u001b[33;1m\u001b[1;3mAnswer: 3.547023357958959\u001b[0m\n",
"Thought:\u001b[32;1m\u001b[1;3m I now know the final answer\n",
"Final Answer: Leo DiCaprio's girlfriend is Eden Polani and her current age raised to the 0.43 power is 3.547023357958959.\u001b[0m\n",
"\n",
"\u001b[1m> Finished chain.\u001b[0m\n"
]
},
{
"output_type": "execute_result",
"data": {
"text/plain": [
"\"Leo DiCaprio's girlfriend is Eden Polani and her current age raised to the 0.43 power is 3.547023357958959.\""
],
"application/vnd.google.colaboratory.intrinsic+json": {
"type": "string"
}
},
"metadata": {},
"execution_count": 4
}
]
},
{
"cell_type": "markdown",
"source": [
"#### Python Decorator 활용"
],
"metadata": {
"id": "fZcDMDsyH88X"
}
},
{
"cell_type": "code",
"source": [
"from langchain.agents import tool\n",
"\n",
"@tool(\"search\", return_direct=True)\n",
"def search_api(query: str) -> str:\n",
" \"\"\"Searches the API for the query.\"\"\"\n",
" return \"Results\""
],
"metadata": {
"id": "ps6zb_1CHzfh"
},
"execution_count": 6,
"outputs": []
},
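{
"cell_type": "markdown",
"source": [
"A short sketch (not lecture code) of how the decorated function above could be handed to an agent. Because of `return_direct=True`, the agent returns the tool's output (the literal string `Results` here) as soon as the tool runs, instead of reasoning further:\n",
"\n",
"```py\n",
"from langchain.agents import initialize_agent, AgentType\n",
"from langchain.llms import OpenAI\n",
"\n",
"llm = OpenAI(temperature=0)\n",
"\n",
"# the @tool decorator already turned search_api into a Tool, so it can be passed as-is\n",
"agent = initialize_agent([search_api], llm,\n",
"                         agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION, verbose=True)\n",
"agent.run(\"What happened in the news today?\")  # returns \"Results\" verbatim\n",
"```"
],
"metadata": {}
},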
{
"cell_type": "markdown",
"source": [
"예시: aifactory의 task와 연결하기\n",
"- 실제로 동작하는 건 아니고, def 내부에 정의한 문자열 description을 읽고 agent가 적절한 tool을 선택하는 걸 보여줌"
],
"metadata": {
"id": "NbUmnoPZIEKU"
}
},
{
"cell_type": "code",
"source": [
"@tool(\"스팸 문장 판별\")\n",
"def classify_spam_api(query: str) -> str:\n",
" \"\"\"주어진 질문에 대해 스팸 문장인지 아닌 지 판별한다.\"\"\"\n",
" return \"get result using https://aifactory.space/competition/detail/2311\"\n",
"\n",
"@tool(\"강아지 간비대 증상 판별\")\n",
"def classify_dog_api(query: str) -> str:\n",
" \"\"\"입력된 이미지에 대해 강아지 간비대 증상 여부 판별한다.\"\"\"\n",
" return \"get result using https://aifactory.space/competition/detail/2267\"\n",
"\n",
"@tool(\"고서한자이미지 분석\")\n",
"def classify_hanja_api(query: str) -> str:\n",
" \"\"\"입력된 이미지에 대해 고서 한자를 분석한다.\"\"\"\n",
" return \"get result using https://aifactory.space/competition/play/2269\"\n",
"\n",
"tools = [classify_spam_api, classify_dog_api, classify_hanja_api]\n",
"agent = initialize_agent(tools, llm, agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION, verbose=True)\n",
"\n",
"agent.run(\"이 이미지에 있는 한자를 분석하고, 문장이 스팸인지 아닌지 알려줘\")"
],
"metadata": {
"colab": {
"base_uri": "https://localhost:8080/",
"height": 304
},
"id": "B5_71GbXIGTt",
"outputId": "9419871b-527c-4597-e185-7cb5ac0fc26b"
},
"execution_count": 7,
"outputs": [
{
"output_type": "stream",
"name": "stdout",
"text": [
"\n",
"\n",
"\u001b[1m> Entering new AgentExecutor chain...\u001b[0m\n",
"\u001b[32;1m\u001b[1;3m 이미지에 있는 한자를 분석하고, 문장이 스팸인지 아닌지 판별해야 한다.\n",
"Action: 고서한자이미지 분석\n",
"Action Input: 이미지\u001b[0m\n",
"Observation: \u001b[38;5;200m\u001b[1;3mget result using https://aifactory.space/competition/play/2269\u001b[0m\n",
"Thought:\u001b[32;1m\u001b[1;3m 이미지에 있는 한자를 분석했으므로, 문장이 스팸인지 아닌지 판별해야 한다.\n",
"Action: 스팸 문장 판별\n",
"Action Input: 문장\u001b[0m\n",
"Observation: \u001b[36;1m\u001b[1;3mget result using https://aifactory.space/competition/detail/2311\u001b[0m\n",
"Thought:\u001b[32;1m\u001b[1;3m 스팸 문장 판별 결과를 통해 문장이 스팸인지 아닌지 판별할 수 있다.\n",
"Final Answer: 문장이 스팸인지 아닌지 판별할 수 있다.\u001b[0m\n",
"\n",
"\u001b[1m> Finished chain.\u001b[0m\n"
]
},
{
"output_type": "execute_result",
"data": {
"text/plain": [
"'문장이 스팸인지 아닌지 판별할 수 있다.'"
],
"application/vnd.google.colaboratory.intrinsic+json": {
"type": "string"
}
},
"metadata": {},
"execution_count": 7
}
]
},
{
"cell_type": "markdown",
"source": [
"#### 도구 간 우선순위 설정하기\n",
"\n"
],
"metadata": {
"id": "lnMzbu-wJsIr"
}
},
{
"cell_type": "markdown",
"source": [
"아래와 같이 도구 설명에 추가하자\n",
"\n",
"Use this more than the normal search if the question is about Music, like 'who is the singer of yesterday?' or 'what is the most popular song in 2022?'\n",
"\n",
"- 일반적인 개발 프레임워크면 함수 파라미터에 order 같은 우선순위 로직을 추가할텐데, 여긴 description을 보충설명함\n",
"- '음악 관련한 검색에는 이 tool을 우선해달라'고 입력하는 식."
],
"metadata": {
"id": "YMVy1MdCJwKv"
}
},
{
"cell_type": "code",
"source": [
"# Import things that are needed generically\n",
"from langchain.agents import initialize_agent, Tool\n",
"from langchain.agents import AgentType\n",
"from langchain.llms import OpenAI\n",
"from langchain import LLMMathChain, SerpAPIWrapper\n",
"search = SerpAPIWrapper()\n",
"tools = [\n",
" Tool(\n",
" name = \"Search\",\n",
" func=search.run,\n",
" description=\"useful for when you need to answer questions about current events\"\n",
" ),\n",
" Tool(\n",
" name=\"Music Search\",\n",
" func=lambda x: \"'All I Want For Christmas Is You' by Mariah Carey.\", #Mock Function\n",
" description=\"A Music search engine. Use this more than the normal search if the question is about Music, like 'who is the singer of yesterday?' or 'what is the most popular song in 2022?'\",\n",
" )\n",
"]\n",
"\n",
"agent = initialize_agent(tools, OpenAI(temperature=0), agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION, verbose=True)\n",
"\n",
"agent.run(\"what is the most famous song of christmas\")"
],
"metadata": {
"colab": {
"base_uri": "https://localhost:8080/",
"height": 232
},
"id": "vljXwrfeIKfe",
"outputId": "aab5e094-91bd-45d8-98c3-3b09717d1638"
},
"execution_count": 8,
"outputs": [
{
"output_type": "stream",
"name": "stdout",
"text": [
"\n",
"\n",
"\u001b[1m> Entering new AgentExecutor chain...\u001b[0m\n",
"\u001b[32;1m\u001b[1;3m I should use a music search engine to find the answer\n",
"Action: Music Search\n",
"Action Input: most famous song of christmas\u001b[0m\n",
"Observation: \u001b[33;1m\u001b[1;3m'All I Want For Christmas Is You' by Mariah Carey.\u001b[0m\n",
"Thought:\u001b[32;1m\u001b[1;3m I now know the final answer\n",
"Final Answer: 'All I Want For Christmas Is You' by Mariah Carey.\u001b[0m\n",
"\n",
"\u001b[1m> Finished chain.\u001b[0m\n"
]
},
{
"output_type": "execute_result",
"data": {
"text/plain": [
"\"'All I Want For Christmas Is You' by Mariah Carey.\""
],
"application/vnd.google.colaboratory.intrinsic+json": {
"type": "string"
}
},
"metadata": {},
"execution_count": 8
}
]
},
{
"cell_type": "markdown",
"source": [
"### 에이전트 유형(Agent Types) 살펴보기\n",
"\n",
"에이전트는 LLM(Language Learning Model)을 사용하여 어떤 작업을 수행하고 어떤 순서로 수행할지 결정합니다. 작업은 도구를 사용하고 그 출력을 관찰하거나 사용자에게 응답을 반환하는 것입니다. \n",
"\n",
"#### zero-shot-react-description \n",
"\n",
"- **Tool description만으로 agent가 어떤 tool을 사용할지 결정한다.**\n",
"- 임의의 개수의 도구를 제공할 수 있습니다. \n",
"- **tool description이 필수**\n",
"\n",
"#### react-docstore \n",
"\n",
"- 이 에이전트는 ReAct 프레임워크를 사용하여 문서 저장소와 상호작용합니다. \n",
"- 검색 도구와 조회 도구 두 가지 도구를 제공해야 합니다(정확한 이름으로 명명해야 합니다). \n",
"- 검색 도구는 문서를 검색해야 하며, 조회 도구는 최근 찾은 문서에서 용어를 조회해야 합니다. \n",
"- 이 에이전트는 원래의 ReAct 논문, 특히 위키백과 예시와 동일합니다.\n",
"\n",
"#### self-ask-with-search \n",
"\n",
"- 이 에이전트는 Intermediate Answer라는 이름의 단일 도구를 사용합니다. \n",
"- 이 도구는 질문에 대한 사실적인 답변을 조회할 수 있어야 합니다. \n",
"- 이 에이전트는 원래의 셀프 애스크 위드 서치 논문과 동일하며, 구글 검색 API가 도구로 제공되었습니다.\n",
"\n",
"#### conversational-react-description \n",
"\n",
"- 이 에이전트는 대화형 설정에서 사용하도록 설계되었습니다. \n",
"- 프롬프트는 에이전트가 도움이 되고 대화적이도록 만듭니다. \n",
"- 이 에이전트는 ReAct 프레임워크를 사용하여 어떤 도구를 사용할지 결정하고, 이전 대화 상호작용을 기억하기 위해 메모리를 사용합니다.\n",
"\n",
"### ReAct(Reasoning and Action)란?\n",
"\n",
"- ReAct(Reasoning and Action) 프레임워크는 인공지능(AI) 에이전트가 자연어 처리(NLP) 기반으로 도구를 선택하고, 해당 도구를 사용하여 작업을 수행할 수 있게 하는 기술입니다. \n",
"- 이 프레임워크는 에이전트가 주어진 자연어 입력을 해석하고, 적절한 도구를 선택하여 사용자의 질문에 응답하거나 특정 작업을 완료하는데 도움이 됩니다.\n",
"\n",
"ReAct 프레임워크의 핵심 아이디어는 다양한 도구(데이터베이스 조회, 검색 엔진, 기타 에이전트 등)를 사용하여 에이전트의 기능을 확장하고, 에이전트가 주어진 문맥에 따라 적절한 도구를 선택할 수 있도록 하는 것입니다.\n",
"\n",
"ReAct 프레임워크의 구성 요소는 대체로 다음과 같습니다:\n",
"\n",
"- 도구 설명: 각 도구에 대한 설명을 제공하여, 에이전트가 도구의 기능과 사용 사례를 이해할 수 있도록 합니다.\n",
"- LLM(Language Learning Model): 자연어 입력을 해석하고, 적절한 도구를 선택하며, 도구를 사용하여 작업을 수행할 수 있도록 지원합니다.\n",
"- 도구 선택 및 실행: 에이전트가 도구를 선택하고 실행하는 메커니즘으로, 에이전트는 자연어 입력에 따라 도구를 선택하고 실행합니다.\n",
"\n",
"ReAct 프레임워크를 사용하는 에이전트는 다양한 상황에서 효과적으로 작동할 수 있으며, 다양한 도구를 사용하여 복잡한 문제를 해결할 수 있습니다. 이를 통해 에이전트는 유연하게 대응하고, 다양한 요구 사항과 작업에 적응할 수 있습니다.\n"
],
"metadata": {
"id": "65G-9JzaMZ75"
}
},
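{
"cell_type": "markdown",
"source": [
"A minimal sketch (not from the lecture) of how the non-default agent types above might be wired up, assuming the same API keys as before. self-ask-with-search needs exactly one tool named `Intermediate Answer`, and the conversational agent's default prompt expects a memory with `memory_key=\"chat_history\"`:\n",
"\n",
"```py\n",
"from langchain.agents import initialize_agent, Tool, AgentType\n",
"from langchain.llms import OpenAI\n",
"from langchain.memory import ConversationBufferMemory\n",
"from langchain import SerpAPIWrapper\n",
"\n",
"llm = OpenAI(temperature=0)\n",
"search = SerpAPIWrapper()\n",
"\n",
"# self-ask-with-search: a single tool that must be named \"Intermediate Answer\"\n",
"self_ask_agent = initialize_agent(\n",
"    [Tool(name=\"Intermediate Answer\", func=search.run,\n",
"          description=\"useful for when you need to ask with search\")],\n",
"    llm, agent=AgentType.SELF_ASK_WITH_SEARCH, verbose=True)\n",
"\n",
"# conversational-react-description: memory keeps earlier turns available to the prompt\n",
"memory = ConversationBufferMemory(memory_key=\"chat_history\")\n",
"conv_agent = initialize_agent(\n",
"    [Tool(name=\"Search\", func=search.run,\n",
"          description=\"useful for when you need to answer questions about current events\")],\n",
"    llm, agent=AgentType.CONVERSATIONAL_REACT_DESCRIPTION, memory=memory, verbose=True)\n",
"```"
],
"metadata": {}
},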
{
"cell_type": "markdown",
"source": [
"#### ReAct 에 정의된 프롬프트 템플릿 코드\n",
"\n",
"```py\n",
"from langchain.agents import Tool, AgentExecutor, LLMSingleActionAgent, AgentOutputParser\n",
"from langchain.prompts import StringPromptTemplate\n",
"from langchain import OpenAI, SerpAPIWrapper, LLMChain\n",
"from typing import List, Union\n",
"from langchain.schema import AgentAction, AgentFinish\n",
"import re\n",
"\n",
"# Set up the base template\n",
"template = \"\"\"Answer the following questions as best you can, but speaking as a pirate might speak. You have access to the following tools:\n",
"\n",
"{tools}\n",
"\n",
"Use the following format:\n",
"\n",
"Question: the input question you must answer\n",
"Thought: you should always think about what to do\n",
"Action: the action to take, should be one of [{tool_names}]\n",
"Action Input: the input to the action\n",
"Observation: the result of the action\n",
"... (this Thought/Action/Action Input/Observation can repeat N times)\n",
"Thought: I now know the final answer\n",
"Final Answer: the final answer to the original input question\n",
"\n",
"Begin! Remember to speak as a pirate when giving your final answer. Use lots of \"Arg\"s\n",
"\n",
"Question: {input}\n",
"{agent_scratchpad}\"\"\"\n",
"\n",
"# Set up a prompt template\n",
"class CustomPromptTemplate(StringPromptTemplate):\n",
" # The template to use\n",
" template: str\n",
" # The list of tools available\n",
" tools: List[Tool]\n",
" \n",
" def format(self, **kwargs) -> str:\n",
" # Get the intermediate steps (AgentAction, Observation tuples)\n",
" # Format them in a particular way\n",
" intermediate_steps = kwargs.pop(\"intermediate_steps\")\n",
" thoughts = \"\"\n",
" for action, observation in intermediate_steps:\n",
" thoughts += action.log\n",
" thoughts += f\"\\nObservation: {observation}\\nThought: \"\n",
" # Set the agent_scratchpad variable to that value\n",
" kwargs[\"agent_scratchpad\"] = thoughts\n",
" # Create a tools variable from the list of tools provided\n",
" kwargs[\"tools\"] = \"\\n\".join([f\"{tool.name}: {tool.description}\" for tool in self.tools])\n",
" # Create a list of tool names for the tools provided\n",
" kwargs[\"tool_names\"] = \", \".join([tool.name for tool in self.tools])\n",
" return self.template.format(**kwargs)\n",
"\n",
"prompt = CustomPromptTemplate(\n",
" template=template,\n",
" tools=tools,\n",
" # This omits the `agent_scratchpad`, `tools`, and `tool_names` variables because those are generated dynamically\n",
" # This includes the `intermediate_steps` variable because that is needed\n",
" input_variables=[\"input\", \"intermediate_steps\"]\n",
")\n",
"```\n",
"\n",
"\n",
"- prompt로만 명령어를 주었을 때 동작한다... = \"의도와 다르게 동작할 수 있다\" (의도와 다르게 동작하지 않을 거라고 보장할 수 없다)\n",
" - 따라서 prompt로 만들어진 코드는 충분한 테스트를 거쳐야 함\n",
" - 출력 결과가 '의도한 것'의 범주를 넘었다고 판단할 경우 출력하지 않거나 다시 시도하도록 하는 로직이 있어야 할 것 같음 (outputParser)\n"
],
"metadata": {
"id": "AtvUNHsBNKx8"
}
},
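{
"cell_type": "markdown",
"source": [
"Following up on the output-parser note above, here is a minimal sketch (mirroring the LangChain custom-agent docs rather than lecture code) of the parsing half of such a custom agent. It converts the raw LLM text into an `AgentAction` or `AgentFinish`, and raises on out-of-format output instead of passing it through:\n",
"\n",
"```py\n",
"import re\n",
"from typing import Union\n",
"from langchain.agents import AgentOutputParser\n",
"from langchain.schema import AgentAction, AgentFinish\n",
"\n",
"class CustomOutputParser(AgentOutputParser):\n",
"    def parse(self, llm_output: str) -> Union[AgentAction, AgentFinish]:\n",
"        # If the model declared a final answer, finish the chain with it\n",
"        if \"Final Answer:\" in llm_output:\n",
"            return AgentFinish(\n",
"                return_values={\"output\": llm_output.split(\"Final Answer:\")[-1].strip()},\n",
"                log=llm_output,\n",
"            )\n",
"        # Otherwise pull out the next tool name and its input\n",
"        regex = r\"Action\\s*\\d*\\s*:(.*?)\\nAction\\s*\\d*\\s*Input\\s*\\d*\\s*:[\\s]*(.*)\"\n",
"        match = re.search(regex, llm_output, re.DOTALL)\n",
"        if not match:\n",
"            # Output strayed from the expected format: refuse rather than guess\n",
"            raise ValueError(f\"Could not parse LLM output: `{llm_output}`\")\n",
"        action = match.group(1).strip()\n",
"        action_input = match.group(2).strip(\" \").strip('\"')\n",
"        return AgentAction(tool=action, tool_input=action_input, log=llm_output)\n",
"```"
],
"metadata": {}
},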
{
"cell_type": "markdown",
"source": [
"## 사용사례\n",
"\n",
"### 개인비서 - BabyAGI\n",
"\n",
"task를 던져 주면, 이 task를 어떻게 해결해 나갈 것인지 plan을 만들어준다."
],
"metadata": {
"id": "dms-kehOOBL4"
}
},
{
"cell_type": "code",
"source": [
"import os\n",
"from collections import deque\n",
"from typing import Dict, List, Optional, Any\n",
"\n",
"from langchain import LLMChain, OpenAI, PromptTemplate\n",
"from langchain.embeddings import OpenAIEmbeddings\n",
"from langchain.llms import BaseLLM\n",
"from langchain.vectorstores.base import VectorStore\n",
"from pydantic import BaseModel, Field\n",
"from langchain.chains.base import Chain"
],
"metadata": {
"id": "XAOvETPBJy_A"
},
"execution_count": 9,
"outputs": []
},
{
"cell_type": "code",
"source": [
"from langchain.vectorstores import FAISS ## vector DB\n",
"from langchain.docstore import InMemoryDocstore # in-memory"
],
"metadata": {
"id": "feqA3ZB9OIO_"
},
"execution_count": 10,
"outputs": []
},
{
"cell_type": "code",
"source": [
"# Define your embedding model\n",
"embeddings_model = OpenAIEmbeddings()\n",
"# Initialize the vectorstore as empty\n",
"import faiss\n",
"\n",
"embedding_size = 1536\n",
"index = faiss.IndexFlatL2(embedding_size)\n",
"vectorstore = FAISS(embeddings_model.embed_query, index, InMemoryDocstore({}), {})"
],
"metadata": {
"id": "-3Ju2NJ5OIZO"
},
"execution_count": 11,
"outputs": []
},
{
"cell_type": "markdown",
"source": [
"langChain에서 돌아갈 핵심 chain 코드"
],
"metadata": {
"id": "nrvzubIFP6gd"
}
},
{
"cell_type": "code",
"source": [
"## 클래스 안에 prompt만 들어가 있다. 소스코드가 아님\n",
"## 사용자에게 input이 주어졌을 때, 문제 해결을 위한 task 단위로 쪼갤 때 쓰는 클래스\n",
"class TaskCreationChain(LLMChain):\n",
" \"\"\"Chain to generates tasks.\"\"\"\n",
"\n",
" @classmethod\n",
" def from_llm(cls, llm: BaseLLM, verbose: bool = True) -> LLMChain:\n",
" \"\"\"Get the response parser.\"\"\"\n",
" task_creation_template = (\n",
" \"You are a task creation AI that uses the result of an execution agent\"\n",
" \" to create new tasks with the following objective: {objective},\"\n",
" \" The last completed task has the result: {result}.\"\n",
" \" This result was based on this task description: {task_description}.\"\n",
" \" These are incomplete tasks: {incomplete_tasks}.\"\n",
" \" Based on the result, create new tasks to be completed\"\n",
" \" by the AI system that do not overlap with incomplete tasks.\"\n",
" \" Return the tasks as an array.\"\n",
" )\n",
" prompt = PromptTemplate(\n",
" template=task_creation_template,\n",
" input_variables=[\n",
" \"result\",\n",
" \"task_description\",\n",
" \"incomplete_tasks\",\n",
" \"objective\",\n",
" ],\n",
" )\n",
" return cls(prompt=prompt, llm=llm, verbose=verbose)"
],
"metadata": {
"id": "hJn115bfOJy0"
},
"execution_count": 12,
"outputs": []
},
{
"cell_type": "code",
"source": [
"## 만들어진 task를 보고 우선순위를 결정하는 클래스\n",
"## 실행 결과에 따라 우선순위가 바뀌어야 할 때\n",
"class TaskPrioritizationChain(LLMChain):\n",
" \"\"\"Chain to prioritize tasks.\"\"\"\n",
"\n",
" @classmethod\n",
" def from_llm(cls, llm: BaseLLM, verbose: bool = True) -> LLMChain:\n",
" \"\"\"Get the response parser.\"\"\"\n",
" task_prioritization_template = (\n",
" \"You are a task prioritization AI tasked with cleaning the formatting of and reprioritizing\"\n",
" \" the following tasks: {task_names}.\"\n",
" \" Consider the ultimate objective of your team: {objective}.\"\n",
" \" Do not remove any tasks. Return the result as a numbered list, like:\"\n",
" \" #. First task\"\n",
" \" #. Second task\"\n",
" \" Start the task list with number {next_task_id}.\"\n",
" )\n",
" prompt = PromptTemplate(\n",
" template=task_prioritization_template,\n",
" input_variables=[\"task_names\", \"next_task_id\", \"objective\"],\n",
" )\n",
" return cls(prompt=prompt, llm=llm, verbose=verbose)"
],
"metadata": {
"id": "y5AZATX4O0bg"
},
"execution_count": 13,
"outputs": []
},
{
"cell_type": "code",
"source": [
"## 실제 task를 실행하는 클래스\n",
"class ExecutionChain(LLMChain):\n",
" \"\"\"Chain to execute tasks.\"\"\"\n",
"\n",
" @classmethod\n",
" def from_llm(cls, llm: BaseLLM, verbose: bool = True) -> LLMChain:\n",
" \"\"\"Get the response parser.\"\"\"\n",
" execution_template = (\n",
" \"You are an AI who performs one task based on the following objective: {objective}.\"\n",
" \" Take into account these previously completed tasks: {context}.\"\n",
" \" Your task: {task}.\"\n",
" \" Response:\"\n",
" )\n",
" prompt = PromptTemplate(\n",
" template=execution_template,\n",
" input_variables=[\"objective\", \"context\", \"task\"],\n",
" )\n",
" return cls(prompt=prompt, llm=llm, verbose=verbose)"
],
"metadata": {
"id": "U-akEopwPnQG"
},
"execution_count": 14,
"outputs": []
},
{
"cell_type": "markdown",
"source": [
"chain 동작과 호출을 위해 필요한 함수를 정의한다"
],
"metadata": {
"id": "YgfS0kWLP-Qs"
}
},
{
"cell_type": "code",
"source": [
"## 다음 task 확인하기\n",
"def get_next_task(\n",
" task_creation_chain: LLMChain,\n",
" result: Dict,\n",
" task_description: str,\n",
" task_list: List[str],\n",
" objective: str,\n",
") -> List[Dict]:\n",
" \"\"\"Get the next task.\"\"\"\n",
" incomplete_tasks = \", \".join(task_list)\n",
" response = task_creation_chain.run(\n",
" result=result,\n",
" task_description=task_description,\n",
" incomplete_tasks=incomplete_tasks,\n",
" objective=objective,\n",
" )\n",
" new_tasks = response.split(\"\\n\")\n",
" return [{\"task_name\": task_name} for task_name in new_tasks if task_name.strip()]"
],
"metadata": {
"id": "9q2mC8ndP2v0"
},
"execution_count": 16,
"outputs": []
},
{
"cell_type": "code",
"source": [
"## task 우선순위 바꾸는 함수\n",
"def prioritize_tasks(\n",
" task_prioritization_chain: LLMChain,\n",
" this_task_id: int,\n",
" task_list: List[Dict],\n",
" objective: str,\n",
") -> List[Dict]:\n",
" \"\"\"Prioritize tasks.\"\"\"\n",
" task_names = [t[\"task_name\"] for t in task_list]\n",
" next_task_id = int(this_task_id) + 1\n",
" response = task_prioritization_chain.run(\n",
" task_names=task_names, next_task_id=next_task_id, objective=objective\n",
" )\n",
" new_tasks = response.split(\"\\n\")\n",
" prioritized_task_list = []\n",
" for task_string in new_tasks:\n",
" if not task_string.strip():\n",
" continue\n",
" task_parts = task_string.strip().split(\".\", 1)\n",
" if len(task_parts) == 2:\n",
" task_id = task_parts[0].strip()\n",
" task_name = task_parts[1].strip()\n",
" prioritized_task_list.append({\"task_id\": task_id, \"task_name\": task_name})\n",
" return prioritized_task_list"
],
"metadata": {
"id": "V6gdv-PpQEWN"
},
"execution_count": 17,
"outputs": []
},
{
"cell_type": "code",
"source": [
"## 먼저 실행해야 할 task 조회\n",
"def _get_top_tasks(vectorstore, query: str, k: int) -> List[str]:\n",
" \"\"\"Get the top k tasks based on the query.\"\"\"\n",
" results = vectorstore.similarity_search_with_score(query, k=k)\n",
" if not results:\n",
" return []\n",
" sorted_results, _ = zip(*sorted(results, key=lambda x: x[1], reverse=True))\n",
" return [str(item.metadata[\"task\"]) for item in sorted_results]\n",
"\n",
"## task 실행\n",
"def execute_task(\n",
" vectorstore, execution_chain: LLMChain, objective: str, task: str, k: int = 5\n",
") -> str:\n",
" \"\"\"Execute a task.\"\"\"\n",
" context = _get_top_tasks(vectorstore, query=objective, k=k)\n",
" return execution_chain.run(objective=objective, context=context, task=task)"
],
"metadata": {
"id": "PCm3H49HQST8"
},
"execution_count": 18,
"outputs": []
},
{
"cell_type": "code",
"source": [
"## 최종적으로 사용할 컴포넌트\n",
"\n",
"class BabyAGI(Chain, BaseModel):\n",
" \"\"\"Controller model for the BabyAGI agent.\"\"\"\n",
" \n",
" ## task list를 관리하는 것\n",
" task_list: deque = Field(default_factory=deque)\n",
" task_creation_chain: TaskCreationChain = Field(...)\n",
" task_prioritization_chain: TaskPrioritizationChain = Field(...)\n",
" execution_chain: ExecutionChain = Field(...)\n",
" task_id_counter: int = Field(1)\n",
" vectorstore: VectorStore = Field(init=False)\n",
" max_iterations: Optional[int] = None\n",
"\n",
" class Config:\n",
" \"\"\"Configuration for this pydantic object.\"\"\"\n",
"\n",
" arbitrary_types_allowed = True\n",
"\n",
" def add_task(self, task: Dict):\n",
" self.task_list.append(task)\n",
"\n",
" def print_task_list(self):\n",
" print(\"\\033[95m\\033[1m\" + \"\\n*****TASK LIST*****\\n\" + \"\\033[0m\\033[0m\")\n",
" for t in self.task_list:\n",
" print(str(t[\"task_id\"]) + \": \" + t[\"task_name\"])\n",
"\n",
" def print_next_task(self, task: Dict):\n",
" print(\"\\033[92m\\033[1m\" + \"\\n*****NEXT TASK*****\\n\" + \"\\033[0m\\033[0m\")\n",
" print(str(task[\"task_id\"]) + \": \" + task[\"task_name\"])\n",
"\n",
" def print_task_result(self, result: str):\n",
" print(\"\\033[93m\\033[1m\" + \"\\n*****TASK RESULT*****\\n\" + \"\\033[0m\\033[0m\")\n",
" print(result)\n",
"\n",
" @property\n",
" def input_keys(self) -> List[str]:\n",
" return [\"objective\"]\n",
"\n",
" @property\n",
" def output_keys(self) -> List[str]:\n",
" return []\n",
"\n",
" # 실제 task 실행하는 함수\n",
" def _call(self, inputs: Dict[str, Any]) -> Dict[str, Any]:\n",
" \"\"\"Run the agent.\"\"\"\n",
" objective = inputs[\"objective\"] # 목적 확인\n",
" first_task = inputs.get(\"first_task\", \"Make a todo list\") # 목적에 맞는, 초기 task 확인\n",
" self.add_task({\"task_id\": 1, \"task_name\": first_task})\n",
" num_iters = 0\n",
" \n",
" ## for문 돌면서\n",
" ## 실행, 우선순위 정렬, 새 task 생성.... 이 수행됨.\n",
" ## max_iteration 초과하면 while문 break로 빠져나옴\n",
" while True:\n",
" if self.task_list:\n",
" self.print_task_list()\n",
"\n",
" # Step 1: Pull the first task\n",
" task = self.task_list.popleft()\n",
" self.print_next_task(task)\n",
"\n",
" # Step 2: Execute the task\n",
" result = execute_task(\n",
" self.vectorstore, self.execution_chain, objective, task[\"task_name\"]\n",
" )\n",
" this_task_id = int(task[\"task_id\"])\n",
" self.print_task_result(result)\n",
"\n",
" # Step 3: Store the result in Pinecone\n",
" result_id = f\"result_{task['task_id']}\"\n",
" self.vectorstore.add_texts(\n",
" texts=[result],\n",
" metadatas=[{\"task\": task[\"task_name\"]}],\n",
" ids=[result_id],\n",
" )\n",
"\n",
" # Step 4: Create new tasks and reprioritize task list\n",
" new_tasks = get_next_task(\n",
" self.task_creation_chain,\n",
" result,\n",
" task[\"task_name\"],\n",
" [t[\"task_name\"] for t in self.task_list],\n",
" objective,\n",
" )\n",
" for new_task in new_tasks:\n",
" self.task_id_counter += 1\n",
" new_task.update({\"task_id\": self.task_id_counter})\n",
" self.add_task(new_task)\n",
" self.task_list = deque(\n",
" prioritize_tasks(\n",
" self.task_prioritization_chain,\n",
" this_task_id,\n",
" list(self.task_list),\n",
" objective,\n",
" )\n",
" )\n",
" num_iters += 1\n",
" if self.max_iterations is not None and num_iters == self.max_iterations:\n",
" print(\n",
" \"\\033[91m\\033[1m\" + \"\\n*****TASK ENDING*****\\n\" + \"\\033[0m\\033[0m\"\n",
" )\n",
" break\n",
" return {}\n",
"\n",
" @classmethod\n",
" def from_llm(\n",
" cls, llm: BaseLLM, vectorstore: VectorStore, verbose: bool = False, **kwargs\n",
" ) -> \"BabyAGI\":\n",
" \"\"\"Initialize the BabyAGI Controller.\"\"\"\n",
" task_creation_chain = TaskCreationChain.from_llm(llm, verbose=verbose)\n",
" task_prioritization_chain = TaskPrioritizationChain.from_llm(\n",
" llm, verbose=verbose\n",
" )\n",
" execution_chain = ExecutionChain.from_llm(llm, verbose=verbose)\n",
" return cls(\n",
" task_creation_chain=task_creation_chain,\n",
" task_prioritization_chain=task_prioritization_chain,\n",
" execution_chain=execution_chain,\n",
" vectorstore=vectorstore,\n",
" **kwargs,\n",
" )"
],
"metadata": {
"id": "AKVii-gSQbac"
},
"execution_count": 19,
"outputs": []
},
{
"cell_type": "markdown",
"source": [
"실행예시"
],
"metadata": {
"id": "boRfHw8-TK0F"
}
},
{
"cell_type": "code",
"source": [
"OBJECTIVE = \"Write a weather report for SF today\"\n",
"llm = OpenAI(temperature=0)\n",
"# Logging of LLMChains\n",
"verbose = False\n",
"# If None, will keep on going forever\n",
"max_iterations: Optional[int] = 3\n",
"baby_agi = BabyAGI.from_llm(\n",
" llm=llm, vectorstore=vectorstore, verbose=verbose, max_iterations=max_iterations\n",
")\n",
"baby_agi({\"objective\": OBJECTIVE})"
],
"metadata": {
"colab": {
"base_uri": "https://localhost:8080/"
},
"id": "tDtGxHwBTH__",
"outputId": "1f4e362a-b6fe-47aa-dda2-5b778ac60a4b"
},
"execution_count": 20,
"outputs": [
{
"output_type": "stream",
"name": "stdout",
"text": [
"\u001b[95m\u001b[1m\n",
"*****TASK LIST*****\n",
"\u001b[0m\u001b[0m\n",
"1: Make a todo list\n",
"\u001b[92m\u001b[1m\n",
"*****NEXT TASK*****\n",
"\u001b[0m\u001b[0m\n",
"1: Make a todo list\n",
"\u001b[93m\u001b[1m\n",
"*****TASK RESULT*****\n",
"\u001b[0m\u001b[0m\n",
"\n",
"\n",
"1. Check the current temperature in San Francisco\n",
"2. Check the forecast for the day\n",
"3. Check the humidity levels\n",
"4. Check the wind speed and direction\n",
"5. Check the UV index\n",
"6. Check the air quality\n",
"7. Check the sunrise and sunset times\n",
"8. Check for any weather alerts or warnings\n",
"9. Write the weather report\n",
"\u001b[95m\u001b[1m\n",
"*****TASK LIST*****\n",
"\u001b[0m\u001b[0m\n",
"2: Check the current temperature in San Francisco\n",
"3: Check the forecast for the day\n",
"4: Check the humidity levels\n",
"5: Check the wind speed and direction\n",
"6: Check the UV index\n",
"7: Check the air quality\n",
"8: Check for any weather alerts or warnings\n",
"9: Check the sunrise and sunset times\n",
"10: Research the historical weather data for San Francisco\n",
"11: Research the average temperature for San Francisco in the current month\n",
"12: Research the average temperature for San Francisco in the previous month\n",
"13: Research the average temperature for San Francisco in the next month\n",
"14: Research the average precipitation for San Francisco in the current month\n",
"15: Research the average precipitation for San Francisco in the previous month\n",
"16: Research the average precipitation for San Francisco in the next month\n",
"17: Research the average wind speed for San Francisco in the current month\n",
"18: Research the average wind speed for San Francisco in the previous month\n",
"19: Research the average wind speed for San Francisco in the next month\n",
"20: Research the average humidity for San Francisco in the current month\n",
"21: Research the average humidity for San Francisco in the previous month\n",
"\u001b[92m\u001b[1m\n",
"*****NEXT TASK*****\n",
"\u001b[0m\u001b[0m\n",
"2: Check the current temperature in San Francisco\n",
"\u001b[93m\u001b[1m\n",
"*****TASK RESULT*****\n",
"\u001b[0m\u001b[0m\n",
"\n",
"\n",
"The current temperature in San Francisco is 68°F (20°C).\n",
"\u001b[95m\u001b[1m\n",
"*****TASK LIST*****\n",
"\u001b[0m\u001b[0m\n",
"3: Check the forecast for the day\n",
"4: Check the humidity levels\n",
"5: Check the wind speed and direction\n",
"6: Check the UV index\n",
"7: Check the air quality\n",
"8: Check for any weather alerts or warnings\n",
"9: Check the sunrise and sunset times\n",
"10: Research the historical weather data for San Francisco\n",
"11: Research the average temperature for San Francisco in the current month\n",
"12: Research the average temperature for San Francisco in the previous month\n",
"13: Research the average temperature for San Francisco in the next month\n",
"14: Research the average precipitation for San Francisco in the current month\n",
"15: Research the average precipitation for San Francisco in the previous month\n",
"16: Research the average precipitation for San Francisco in the next month\n",
"17: Research the average wind speed for San Francisco in the current month\n",
"18: Research the average wind speed for San Francisco in the previous month\n",
"19: Research the average wind speed for San Francisco in the next month\n",
"20: Research the average humidity for San Francisco in the current month\n",
"21: Research the average humidity for San Francisco in the previous month\n",
"22: Compare the current temperature in San Francisco to the historical average temperature for the same day.\n",
"23: Compare the current temperature in San\n",
"\u001b[92m\u001b[1m\n",
"*****NEXT TASK*****\n",
"\u001b[0m\u001b[0m\n",
"3: Check the forecast for the day\n",
"\u001b[93m\u001b[1m\n",
"*****TASK RESULT*****\n",
"\u001b[0m\u001b[0m\n",
"\n",
"\n",
"I will check the forecast for San Francisco today. According to the National Weather Service, the forecast for today is mostly sunny with a high of 68 degrees Fahrenheit and a low of 54 degrees Fahrenheit. There is a slight chance of showers in the evening.\n",
"\u001b[91m\u001b[1m\n",
"*****TASK ENDING*****\n",
"\u001b[0m\u001b[0m\n"
]
},
{
"output_type": "execute_result",
"data": {
"text/plain": [
"{'objective': 'Write a weather report for SF today'}"
]
},
"metadata": {},
"execution_count": 20
}
]
},
{
"cell_type": "markdown",
"source": [
"### 개인비서 - 사용자 정의 에이전트와 플러그인 검색\n",
"\n",
"- api들 중, ai plugin을 지정해둔 api들이 있음. 이걸 받아서 쓰는 것.\n",
"\n",
"---\n",
"\n",
"AI 플러그인과 상호 작용할 수 있는 사용자 정의 에이전트를 구축하기 위해 두 가지 개념을 결합합니다:\n",
"\n",
"사용자 정의 에이전트와 검색: 이것은 임의의 많은 플러그인을 사용하여 작업을 시도할 때 유용한 많은 도구를 검색하는 개념을 소개합니다.\n",
"\n",
"자연어 API 체인: 이것은 OpenAPI 엔드포인트 주변에 자연어 래퍼를 만듭니다. 이것이 유용한 이유는 (1) 플러그인이 내부적으로 OpenAPI 엔드포인트를 사용하고, (2) NLAChain에 래핑하면 라우터 에이전트가 더 쉽게 호출할 수 있기 때문입니다.\n",
"\n",
"이 노트북에서 도입된 새로운 아이디어는 도구를 명시적으로 선택하는 것이 아니라 사용할 OpenAPI 사양을 선택하는 검색의 개념입니다. 그런 다음 해당 OpenAPI 사양에서 도구를 생성할 수 있습니다. 이것이 에이전트가 플러그인을 사용하도록 하려는 경우 사용 사례입니다. 플러그인을 먼저 선택한 다음 엔드포인트를 선택하는 것이 엔드포인트를 직접 선택하는 것보다 효율적일 수 있습니다. 이는 플러그인이 선택에 대한 더 유용한 정보를 포함할 수 있기 때문입니다.\n",
"\n",
"\n"
],
"metadata": {
"id": "vB4aJtCDTbto"
}
},
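{
"cell_type": "markdown",
"source": [
"Each plugin above publishes its manifest at `/.well-known/ai-plugin.json`. As a quick, illustrative check (not part of the original flow), you can fetch one manifest directly and look at the two fields the retrieval step further down relies on:\n",
"\n",
"```python\n",
"import requests\n",
"\n",
"manifest = requests.get(\"https://www.klarna.com/.well-known/ai-plugin.json\").json()\n",
"print(manifest[\"name_for_model\"])         # stored as the plugin_name metadata of each Document\n",
"print(manifest[\"description_for_model\"])  # the text that gets embedded and searched by the vector store\n",
"```"
],
"metadata": {}
},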
{
"cell_type": "code",
"source": [
"from langchain.agents import Tool, AgentExecutor, LLMSingleActionAgent, AgentOutputParser\n",
"from langchain.prompts import StringPromptTemplate\n",
"from langchain import OpenAI, SerpAPIWrapper, LLMChain\n",
"from typing import List, Union\n",
"from langchain.schema import AgentAction, AgentFinish\n",
"from langchain.agents.agent_toolkits import NLAToolkit\n",
"from langchain.tools.plugin import AIPlugin\n",
"import re"
],
"metadata": {
"id": "HBlhaZHjTMQm"
},
"execution_count": 21,
"outputs": []
},
{
"cell_type": "code",
"source": [
"llm = OpenAI(temperature=0)"
],
"metadata": {
"id": "SWJGH6isTmOa"
},
"execution_count": 22,
"outputs": []
},
{
"cell_type": "code",
"source": [
"urls = [\n",
" \"https://datasette.io/.well-known/ai-plugin.json\",\n",
" \"https://api.speak.com/.well-known/ai-plugin.json\",\n",
" \"https://www.wolframalpha.com/.well-known/ai-plugin.json\",\n",
" \"https://www.zapier.com/.well-known/ai-plugin.json\",\n",
" \"https://www.klarna.com/.well-known/ai-plugin.json\",\n",
" \"https://www.joinmilo.com/.well-known/ai-plugin.json\",\n",
" \"https://slack.com/.well-known/ai-plugin.json\",\n",
" \"https://schooldigger.com/.well-known/ai-plugin.json\",\n",
"]\n",
"\n",
"AI_PLUGINS = [AIPlugin.from_url(url) for url in urls]"
],
"metadata": {
"id": "-tg5vvtATohK"
},
"execution_count": 23,
"outputs": []
},
{
"cell_type": "code",
"source": [
"from langchain.vectorstores import FAISS\n",
"from langchain.embeddings import OpenAIEmbeddings\n",
"from langchain.schema import Document"
],
"metadata": {
"id": "4WSGvyvrTovv"
},
"execution_count": 24,
"outputs": []
},
{
"cell_type": "code",
"source": [
"embeddings = OpenAIEmbeddings()\n",
"docs = [\n",
" Document(page_content=plugin.description_for_model, \n",
" metadata={\"plugin_name\": plugin.name_for_model}\n",
" )\n",
" for plugin in AI_PLUGINS\n",
"]\n",
"vector_store = FAISS.from_documents(docs, embeddings)\n",
"toolkits_dict = {plugin.name_for_model: \n",
" NLAToolkit.from_llm_and_ai_plugin(llm, plugin) \n",
" for plugin in AI_PLUGINS}"
],
"metadata": {
"colab": {
"base_uri": "https://localhost:8080/"
},
"id": "BzRXNO3VUBv-",
"outputId": "403d8cab-305e-4541-c1b6-aac8011d026b"
},
"execution_count": 25,
"outputs": [
{
"output_type": "stream",
"name": "stderr",
"text": [
"WARNING:langchain.tools.openapi.utils.openapi_utils:Attempting to load an OpenAPI 3.0.1 spec. This may result in degraded performance. Convert your OpenAPI spec to 3.1.* spec for better support.\n",
"WARNING:langchain.tools.openapi.utils.openapi_utils:Attempting to load an OpenAPI 3.0.1 spec. This may result in degraded performance. Convert your OpenAPI spec to 3.1.* spec for better support.\n",
"WARNING:langchain.tools.openapi.utils.openapi_utils:Attempting to load an OpenAPI 3.0.1 spec. This may result in degraded performance. Convert your OpenAPI spec to 3.1.* spec for better support.\n",
"WARNING:langchain.tools.openapi.utils.openapi_utils:Attempting to load an OpenAPI 3.0.2 spec. This may result in degraded performance. Convert your OpenAPI spec to 3.1.* spec for better support.\n",
"WARNING:langchain.tools.openapi.utils.openapi_utils:Attempting to load an OpenAPI 3.0.1 spec. This may result in degraded performance. Convert your OpenAPI spec to 3.1.* spec for better support.\n",
"WARNING:langchain.tools.openapi.utils.openapi_utils:Attempting to load an OpenAPI 3.0.1 spec. This may result in degraded performance. Convert your OpenAPI spec to 3.1.* spec for better support.\n",
"WARNING:langchain.tools.openapi.utils.openapi_utils:Attempting to load an OpenAPI 3.0.1 spec. This may result in degraded performance. Convert your OpenAPI spec to 3.1.* spec for better support.\n",
"WARNING:langchain.tools.openapi.utils.openapi_utils:Attempting to load an OpenAPI 3.0.1 spec. This may result in degraded performance. Convert your OpenAPI spec to 3.1.* spec for better support.\n",
"WARNING:langchain.tools.openapi.utils.openapi_utils:Attempting to load a Swagger 2.0 spec. This may result in degraded performance. Convert your OpenAPI spec to 3.1.* spec for better support.\n"
]
}
]
},
{
"cell_type": "markdown",
"source": [
"LangChain 프레임워크에서 제공하는 tool만 쓰는 게 아니라, ai plugin이 만들어져 있는 api를 tool로 사용하려는 것."
],
"metadata": {
"id": "c2KgcvKyUKjI"
}
},
{
"cell_type": "code",
"source": [
"retriever = vector_store.as_retriever()\n",
"\n",
"def get_tools(query):\n",
" # Get documents, which contain the Plugins to use\n",
" docs = retriever.get_relevant_documents(query)\n",
" # Get the toolkits, one for each plugin\n",
" tool_kits = [toolkits_dict[d.metadata[\"plugin_name\"]] for d in docs]\n",
" # Get the tools: a separate NLAChain for each endpoint\n",
" tools = []\n",
" for tk in tool_kits:\n",
" tools.extend(tk.nla_tools)\n",
" return tools"
],
"metadata": {
"id": "tHG27QSNUDLl"
},
"execution_count": 26,
"outputs": []
},
{
"cell_type": "code",
"source": [
"tools = get_tools(\"What could I do today with my kiddo\")\n",
"[t.name for t in tools] ## 사용할 수 있는 api tool이 ai plugin으로 확장됨"
],
"metadata": {
"colab": {
"base_uri": "https://localhost:8080/"
},
"id": "gKYJCARTUUOd",
"outputId": "15535dd3-4c2d-4fd5-f8e1-f88d9cc7c5d6"
},
"execution_count": 27,
"outputs": [
{
"output_type": "execute_result",
"data": {
"text/plain": [
"['Milo.askMilo',\n",
" 'Zapier_Natural_Language_Actions_(NLA)_API_(Dynamic)_-_Beta.search_all_actions',\n",
" 'Zapier_Natural_Language_Actions_(NLA)_API_(Dynamic)_-_Beta.preview_a_zap',\n",
" 'Zapier_Natural_Language_Actions_(NLA)_API_(Dynamic)_-_Beta.get_configuration_link',\n",
" 'Zapier_Natural_Language_Actions_(NLA)_API_(Dynamic)_-_Beta.list_exposed_actions',\n",
" 'Zapier_Natural_Language_Actions_(NLA)_API_(Dynamic)_-_Beta.get_execution_log_endpoint',\n",
" 'SchoolDigger_API_V2.0.Autocomplete_GetSchools',\n",
" 'SchoolDigger_API_V2.0.Districts_GetAllDistricts2',\n",
" 'SchoolDigger_API_V2.0.Districts_GetDistrict2',\n",
" 'SchoolDigger_API_V2.0.Rankings_GetSchoolRank2',\n",
" 'SchoolDigger_API_V2.0.Rankings_GetRank_District',\n",
" 'SchoolDigger_API_V2.0.Schools_GetAllSchools20',\n",
" 'SchoolDigger_API_V2.0.Schools_GetSchool20',\n",
" 'Speak.translate',\n",
" 'Speak.explainPhrase',\n",
" 'Speak.explainTask']"
]
},
"metadata": {},
"execution_count": 27
}
]
},
{
"cell_type": "code",
"source": [
"tools = get_tools(\"what shirts can i buy?\")\n",
"[t.name for t in tools]"
],
"metadata": {
"colab": {
"base_uri": "https://localhost:8080/"
},
"id": "0E4lOyNEUd6Y",
"outputId": "59e7332f-c835-4d7f-9cc3-e8453d6e3f4d"
},
"execution_count": 28,
"outputs": [
{
"output_type": "execute_result",
"data": {
"text/plain": [
"['Open_AI_Klarna_product_Api.productsUsingGET',\n",
" 'Milo.askMilo',\n",
" 'Zapier_Natural_Language_Actions_(NLA)_API_(Dynamic)_-_Beta.search_all_actions',\n",
" 'Zapier_Natural_Language_Actions_(NLA)_API_(Dynamic)_-_Beta.preview_a_zap',\n",
" 'Zapier_Natural_Language_Actions_(NLA)_API_(Dynamic)_-_Beta.get_configuration_link',\n",
" 'Zapier_Natural_Language_Actions_(NLA)_API_(Dynamic)_-_Beta.list_exposed_actions',\n",
" 'Zapier_Natural_Language_Actions_(NLA)_API_(Dynamic)_-_Beta.get_execution_log_endpoint',\n",
" 'SchoolDigger_API_V2.0.Autocomplete_GetSchools',\n",
" 'SchoolDigger_API_V2.0.Districts_GetAllDistricts2',\n",
" 'SchoolDigger_API_V2.0.Districts_GetDistrict2',\n",
" 'SchoolDigger_API_V2.0.Rankings_GetSchoolRank2',\n",
" 'SchoolDigger_API_V2.0.Rankings_GetRank_District',\n",
" 'SchoolDigger_API_V2.0.Schools_GetAllSchools20',\n",
" 'SchoolDigger_API_V2.0.Schools_GetSchool20']"
]
},
"metadata": {},
"execution_count": 28
}
]
},
{
"cell_type": "code",
"source": [
"# Set up the base template\n",
"template = \"\"\"Answer the following questions as best you can, but speaking as a pirate might speak. You have access to the following tools:\n",
"\n",
"{tools}\n",
"\n",
"Use the following format:\n",
"\n",
"Question: the input question you must answer\n",
"Thought: you should always think about what to do\n",
"Action: the action to take, should be one of [{tool_names}]\n",
"Action Input: the input to the action\n",
"Observation: the result of the action\n",
"... (this Thought/Action/Action Input/Observation can repeat N times)\n",
"Thought: I now know the final answer\n",
"Final Answer: the final answer to the original input question\n",
"\n",
"Begin! Remember to speak as a pirate when giving your final answer. Use lots of \"Arg\"s\n",
"\n",
"Question: {input}\n",
"{agent_scratchpad}\"\"\""
],
"metadata": {
"id": "vLe1xv1LUlwo"
},
"execution_count": 29,
"outputs": []
},
{
"cell_type": "code",
"source": [
"from typing import Callable\n",
"# Set up a prompt template\n",
"class CustomPromptTemplate(StringPromptTemplate):\n",
" # The template to use\n",
" template: str\n",
" ############## NEW ######################\n",
" # The list of tools available\n",
" tools_getter: Callable\n",
" \n",
" def format(self, **kwargs) -> str:\n",
" # Get the intermediate steps (AgentAction, Observation tuples)\n",
" # Format them in a particular way\n",
" intermediate_steps = kwargs.pop(\"intermediate_steps\")\n",
" thoughts = \"\"\n",
" for action, observation in intermediate_steps:\n",
" thoughts += action.log\n",
" thoughts += f\"\\nObservation: {observation}\\nThought: \"\n",
" # Set the agent_scratchpad variable to that value\n",
" kwargs[\"agent_scratchpad\"] = thoughts\n",
" ############## NEW ######################\n",
" tools = self.tools_getter(kwargs[\"input\"])\n",
" # Create a tools variable from the list of tools provided\n",
" kwargs[\"tools\"] = \"\\n\".join([f\"{tool.name}: {tool.description}\" for tool in tools])\n",
" # Create a list of tool names for the tools provided\n",
" kwargs[\"tool_names\"] = \", \".join([tool.name for tool in tools])\n",
" return self.template.format(**kwargs)"
],
"metadata": {
"id": "okH4g2SbUx9p"
},
"execution_count": 30,
"outputs": []
},
{
"cell_type": "code",
"source": [
"prompt = CustomPromptTemplate(\n",
" template=template,\n",
" tools_getter=get_tools,\n",
" # This omits the `agent_scratchpad`, `tools`, and `tool_names` variables because those are generated dynamically\n",
" # This includes the `intermediate_steps` variable because that is needed\n",
" input_variables=[\"input\", \"intermediate_steps\"]\n",
")"
],
"metadata": {
"id": "i3A5_OexUyPj"
},
"execution_count": 31,
"outputs": []
},
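{
"cell_type": "markdown",
"source": [
"As a quick sanity check (not in the original notebook), you can render the prompt once with an empty scratchpad to confirm that the tool descriptions really are injected dynamically based on the query:\n",
"\n",
"```python\n",
"# Hypothetical check: format the prompt by hand and inspect the injected {tools} / {tool_names}.\n",
"print(prompt.format(input=\"what shirts can i buy?\", intermediate_steps=[]))\n",
"```"
],
"metadata": {}
},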
{
"cell_type": "code",
"source": [
"class CustomOutputParser(AgentOutputParser):\n",
" \n",
" def parse(self, llm_output: str) -> Union[AgentAction, AgentFinish]:\n",
" # Check if agent should finish\n",
" if \"Final Answer:\" in llm_output:\n",
" return AgentFinish(\n",
" # Return values is generally always a dictionary with a single `output` key\n",
" # It is not recommended to try anything else at the moment :)\n",
" return_values={\"output\": llm_output.split(\"Final Answer:\")[-1].strip()},\n",
" log=llm_output,\n",
" )\n",
" # Parse out the action and action input\n",
" regex = r\"Action\\s*\\d*\\s*:(.*?)\\nAction\\s*\\d*\\s*Input\\s*\\d*\\s*:[\\s]*(.*)\"\n",
" match = re.search(regex, llm_output, re.DOTALL)\n",
" if not match:\n",
" raise ValueError(f\"Could not parse LLM output: `{llm_output}`\")\n",
" action = match.group(1).strip()\n",
" action_input = match.group(2)\n",
" # Return the action and action input\n",
" return AgentAction(tool=action, tool_input=action_input.strip(\" \").strip('\"'), log=llm_output)"
],
"metadata": {
"id": "NJc3ZLqwUzf-"
},
"execution_count": 32,
"outputs": []
},
{
"cell_type": "code",
"source": [
"output_parser = CustomOutputParser()"
],
"metadata": {
"id": "H3wdlSTLU07h"
},
"execution_count": 33,
"outputs": []
},
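{
"cell_type": "markdown",
"source": [
"To see what the parser extracts, you can feed it a hand-written completion (a made-up example, not real model output):\n",
"\n",
"```python\n",
"sample = 'Thought: I should look for shirts\\nAction: Open_AI_Klarna_product_Api.productsUsingGET\\nAction Input: shirts'\n",
"output_parser.parse(sample)\n",
"# -> AgentAction(tool='Open_AI_Klarna_product_Api.productsUsingGET', tool_input='shirts', log=sample)\n",
"```"
],
"metadata": {}
},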
{
"cell_type": "code",
"source": [
"llm = OpenAI(temperature=0)\n",
"\n",
"# LLM chain consisting of the LLM and a prompt\n",
"llm_chain = LLMChain(llm=llm, prompt=prompt)\n",
"\n",
"tool_names = [tool.name for tool in tools]\n",
"agent = LLMSingleActionAgent(\n",
" llm_chain=llm_chain, \n",
" output_parser=output_parser,\n",
" stop=[\"\\nObservation:\"], \n",
" allowed_tools=tool_names\n",
")"
],
"metadata": {
"id": "W5MNMRWeU2F9"
},
"execution_count": 34,
"outputs": []
},
{
"cell_type": "code",
"source": [
"agent_executor = AgentExecutor.from_agent_and_tools(agent=agent, tools=tools, verbose=True)\n",
"agent_executor.run(\"what shirts can i buy?\")"
],
"metadata": {
"colab": {
"base_uri": "https://localhost:8080/",
"height": 250
},
"id": "9b65q007U5Bd",
"outputId": "0437b3c8-7165-48c9-b191-9fea0086aaa0"
},
"execution_count": 35,
"outputs": [
{
"output_type": "stream",
"name": "stdout",
"text": [
"\n",
"\n",
"\u001b[1m> Entering new AgentExecutor chain...\u001b[0m\n",
"\u001b[32;1m\u001b[1;3mThought: I need to find a way to get product information\n",
"Action: Open_AI_Klarna_product_Api.productsUsingGET\n",
"Action Input: shirts\u001b[0m\n",
"\n",
"Observation:\u001b[36;1m\u001b[1;3mI found 10 shirts from the API response. They range in price from $19.99 to $450.00 and come in a variety of colors, materials, and patterns.\u001b[0m\n",
"\u001b[32;1m\u001b[1;3m I now know what shirts I can buy\n",
"Final Answer: Arg, I found 10 shirts from the API response. They range in price from $19.99 to $450.00 and come in a variety of colors, materials, and patterns.\u001b[0m\n",
"\n",
"\u001b[1m> Finished chain.\u001b[0m\n"
]
},
{
"output_type": "execute_result",
"data": {
"text/plain": [
"'Arg, I found 10 shirts from the API response. They range in price from $19.99 to $450.00 and come in a variety of colors, materials, and patterns.'"
],
"application/vnd.google.colaboratory.intrinsic+json": {
"type": "string"
}
},
"metadata": {},
"execution_count": 35
}
]
},
{
"cell_type": "markdown",
"source": [
"---"
],
"metadata": {
"id": "N4dBTpT0U_8e"
}
},
{
"cell_type": "markdown",
"source": [
"### 개인비서 - LangChain에서 생성 에이전트\n",
"\n",
"\n",
"이 노트북은 Park 등이 작성한 논문 Generative Agents: Interactive Simulacra of Human Behavior를 기반으로 하는 생성 에이전트를 구현합니다. 여기에서는 LangChain 검색기를 기반으로 한 시간 가중치가 있는 Memory 객체를 활용합니다.\n",
"\n",
"- 자기들끼리 대화하는 프로그램\n",
"\n",
"캐릭터를 정의한다.\n",
"- 메모리를 정의한다. 메모리: 해당 캐릭터에 경험 / 기억을 저장하는 장소\n",
"- 캐릭터는 자신에게 저장된 메모리를 토대로 답변하는 것."
],
"metadata": {
"id": "Rz02H5SoVCsT"
}
},
{
"cell_type": "code",
"source": [
"# Use termcolor to make it easy to colorize the outputs.\n",
"!pip install termcolor > /dev/null"
],
"metadata": {
"id": "nbHbm63uU5Oc"
},
"execution_count": 1,
"outputs": []
},
{
"cell_type": "code",
"source": [
"import re\n",
"from datetime import datetime, timedelta\n",
"from typing import List, Optional, Tuple\n",
"from termcolor import colored\n",
"\n",
"from pydantic import BaseModel, Field\n",
"\n",
"from langchain import LLMChain\n",
"from langchain.chat_models import ChatOpenAI\n",
"from langchain.docstore import InMemoryDocstore\n",
"from langchain.embeddings import OpenAIEmbeddings\n",
"from langchain.prompts import PromptTemplate\n",
"from langchain.retrievers import TimeWeightedVectorStoreRetriever\n",
"from langchain.schema import BaseLanguageModel, Document\n",
"from langchain.vectorstores import FAISS\n"
],
"metadata": {
"id": "3Um8Z2NCVIvc"
},
"execution_count": 4,
"outputs": []
},
{
"cell_type": "code",
"source": [
"USER_NAME = \"Person A\" # The name you want to use when interviewing the agent.\n",
"LLM = ChatOpenAI(max_tokens=1500) # Can be any LLM you want.\n"
],
"metadata": {
"id": "MDYcMmHUVKjk"
},
"execution_count": 8,
"outputs": []
},
{
"cell_type": "markdown",
"source": [
"_parse_list: 줄바꿈 문자로 구분된 문자열을 리스트로 변환해주는 메서드입니다.\n",
"\n",
"_compute_agent_summary: 현재 Agent의 핵심 특성을 요약하여 반환하는 메서드입니다. 이 메서드는 '관련된 기억'을 얻어오고, 이를 LLMChain을 사용하여 요약 문자열을 생성합니다.\n",
"\n",
"_get_topics_of_reflection: 최근 관측에 대해 가장 중요한 세 가지 질문을 반환하는 메서드입니다. 이 메서드는 '관련된 기억'을 가져와 LLMChain을 사용하여 적절한 질문을 생성합니다.\n",
"\n",
"_get_insights_on_topic: LLMChain을 사용하여 지정된 주제에 대한 '인사이트'를 생성하는 메서드입니다.\n",
"\n",
"pause_to_reflect: 최근 관측에 대해 reflection_threshold로 지정된 값을 초과하면, 이 메서드를 호출하여 다시 생각하고 '인사이트'를 생성합니다.\n",
"\n",
"_score_memory_importance: 주어진 기억이 얼마나 중요한지 점수를 매겨 반환하는 메서드입니다.\n",
"\n",
"add_memory: 관찰 또는 기억을 Agent의 기억으로 추가하고, reflection_threshold에 도달하면 pause_to_reflect 메서드를 호출합니다.\n",
"\n",
"fetch_memories: 지정된 기억과 관련된 기억을 가져오는 메서드입니다.\n",
"\n",
"get_summary: Agent의 요약 문자열을 반환하는 메서드입니다.\n",
"\n",
"get_full_header: Agent의 요약 문자열, 상태 및 현재 시간을 포함한 전체 헤더 문자열을 반환하는 메서드입니다.\n",
"\n",
"_get_entity_from_observation: 관찰에 대한 관측체를 반환하는 메서드입니다.\n",
"\n",
"_get_entity_action: 관찰에 대한 관측체와 관련된 동작을 반환하는 메서드입니다.\n",
"\n",
"_format_memories_to_summarize: 요약할 관련 기억들의 형식을 변환하는 메서드입니다.\n",
"\n",
"summarize_related_memories: 관측과 관련된 가장 관련성 높은 기억들을 요약하여 반환하는 메서드입니다.\n",
"\n",
"_get_memories_until_limit: 최대 토큰 제한에 도달할 때까지 기억을 가져오는 메서드입니다.\n",
"\n",
"_generate_reaction: 관찰에 대한 응답 문자열을 생성하는 메서드입니다.\n",
"\n",
"generate_reaction: 관찰에 대한 응답을 생성하고, 'REACT' 또는 'SAY'를 반환합니다.\n",
"\n",
"generate_dialogue_response: 관찰에 대한 대화 응답을 생성하고, 'GOODBYE' 또는 'SAY'"
],
"metadata": {
"id": "hxCt7fP6VOxQ"
}
},
{
"cell_type": "code",
"source": [
"class GenerativeAgent(BaseModel):\n",
" \"\"\"A character with memory and innate characteristics.\"\"\"\n",
" \n",
" name: str\n",
" age: int\n",
" traits: str\n",
" \"\"\"The traits of the character you wish not to change.\"\"\"\n",
" status: str\n",
" \"\"\"Current activities of the character.\"\"\"\n",
" llm: BaseLanguageModel\n",
" memory_retriever: TimeWeightedVectorStoreRetriever\n",
" \"\"\"The retriever to fetch related memories.\"\"\"\n",
" verbose: bool = False\n",
" \n",
" reflection_threshold: Optional[float] = None\n",
" \"\"\"When the total 'importance' of memories exceeds the above threshold, stop to reflect.\"\"\"\n",
" \n",
" current_plan: List[str] = []\n",
" \"\"\"The current plan of the agent.\"\"\"\n",
" \n",
" summary: str = \"\" #: :meta private:\n",
" summary_refresh_seconds: int= 3600 #: :meta private:\n",
" last_refreshed: datetime =Field(default_factory=datetime.now) #: :meta private:\n",
" daily_summaries: List[str] #: :meta private:\n",
" memory_importance: float = 0.0 #: :meta private:\n",
" max_tokens_limit: int = 1200 #: :meta private:\n",
" \n",
" class Config:\n",
" \"\"\"Configuration for this pydantic object.\"\"\"\n",
"\n",
" arbitrary_types_allowed = True\n",
"\n",
" @staticmethod\n",
" def _parse_list(text: str) -> List[str]:\n",
" \"\"\"Parse a newline-separated string into a list of strings.\"\"\"\n",
" lines = re.split(r'\\n', text.strip())\n",
" return [re.sub(r'^\\s*\\d+\\.\\s*', '', line).strip() for line in lines]\n",
"\n",
" def _compute_agent_summary(self):\n",
" \"\"\"\"\"\"\n",
" prompt = PromptTemplate.from_template(\n",
" \"How would you summarize {name}'s core characteristics given the\"\n",
" +\" following statements:\\n\"\n",
" +\"{related_memories}\"\n",
" + \"Do not embellish.\"\n",
" +\"\\n\\nSummary: \"\n",
" )\n",
" # The agent seeks to think about their core characteristics.\n",
" relevant_memories = self.fetch_memories(f\"{self.name}'s core characteristics\")\n",
" relevant_memories_str = \"\\n\".join([f\"{mem.page_content}\" for mem in relevant_memories])\n",
" chain = LLMChain(llm=self.llm, prompt=prompt, verbose=self.verbose)\n",
" return chain.run(name=self.name, related_memories=relevant_memories_str).strip()\n",
" \n",
" def _get_topics_of_reflection(self, last_k: int = 50) -> Tuple[str, str, str]:\n",
" \"\"\"Return the 3 most salient high-level questions about recent observations.\"\"\"\n",
" prompt = PromptTemplate.from_template(\n",
" \"{observations}\\n\\n\"\n",
" + \"Given only the information above, what are the 3 most salient\"\n",
" + \" high-level questions we can answer about the subjects in the statements?\"\n",
" + \" Provide each question on a new line.\\n\\n\"\n",
" )\n",
" reflection_chain = LLMChain(llm=self.llm, prompt=prompt, verbose=self.verbose)\n",
" observations = self.memory_retriever.memory_stream[-last_k:]\n",
" observation_str = \"\\n\".join([o.page_content for o in observations])\n",
" result = reflection_chain.run(observations=observation_str)\n",
" return self._parse_list(result)\n",
" \n",
" def _get_insights_on_topic(self, topic: str) -> List[str]:\n",
" \"\"\"Generate 'insights' on a topic of reflection, based on pertinent memories.\"\"\"\n",
" prompt = PromptTemplate.from_template(\n",
" \"Statements about {topic}\\n\"\n",
" +\"{related_statements}\\n\\n\"\n",
" + \"What 5 high-level insights can you infer from the above statements?\"\n",
" + \" (example format: insight (because of 1, 5, 3))\"\n",
" )\n",
" related_memories = self.fetch_memories(topic)\n",
" related_statements = \"\\n\".join([f\"{i+1}. {memory.page_content}\" \n",
" for i, memory in \n",
" enumerate(related_memories)])\n",
" reflection_chain = LLMChain(llm=self.llm, prompt=prompt, verbose=self.verbose)\n",
" result = reflection_chain.run(topic=topic, related_statements=related_statements)\n",
" # TODO: Parse the connections between memories and insights\n",
" return self._parse_list(result)\n",
" \n",
" def pause_to_reflect(self) -> List[str]:\n",
" \"\"\"Reflect on recent observations and generate 'insights'.\"\"\"\n",
" print(colored(f\"Character {self.name} is reflecting\", \"blue\"))\n",
" new_insights = []\n",
" topics = self._get_topics_of_reflection()\n",
" for topic in topics:\n",
" insights = self._get_insights_on_topic( topic)\n",
" for insight in insights:\n",
" self.add_memory(insight)\n",
" new_insights.extend(insights)\n",
" return new_insights\n",
" \n",
" ## 가중치 기반으로 메모리에 저장\n",
" def _score_memory_importance(self, memory_content: str, weight: float = 0.15) -> float:\n",
" \"\"\"Score the absolute importance of the given memory.\"\"\"\n",
" # A weight of 0.25 makes this less important than it\n",
" # would be otherwise, relative to salience and time\n",
" prompt = PromptTemplate.from_template(\n",
" \"On the scale of 1 to 10, where 1 is purely mundane\"\n",
" +\" (e.g., brushing teeth, making bed) and 10 is\"\n",
" + \" extremely poignant (e.g., a break up, college\"\n",
" + \" acceptance), rate the likely poignancy of the\"\n",
" + \" following piece of memory. Respond with a single integer.\"\n",
" + \"\\nMemory: {memory_content}\"\n",
" + \"\\nRating: \"\n",
" )\n",
" chain = LLMChain(llm=self.llm, prompt=prompt, verbose=self.verbose)\n",
" score = chain.run(memory_content=memory_content).strip()\n",
" match = re.search(r\"^\\D*(\\d+)\", score)\n",
" if match:\n",
" return (float(score[0]) / 10) * weight\n",
" else:\n",
" return 0.0\n",
"\n",
" def add_memory(self, memory_content: str) -> List[str]:\n",
" \"\"\"Add an observation or memory to the agent's memory.\"\"\"\n",
" importance_score = self._score_memory_importance(memory_content)\n",
" self.memory_importance += importance_score\n",
" document = Document(page_content=memory_content, metadata={\"importance\": importance_score})\n",
" result = self.memory_retriever.add_documents([document])\n",
"\n",
" # After an agent has processed a certain amount of memories (as measured by\n",
" # aggregate importance), it is time to reflect on recent events to add\n",
" # more synthesized memories to the agent's memory stream.\n",
" if (self.reflection_threshold is not None \n",
" and self.memory_importance > self.reflection_threshold\n",
" and self.status != \"Reflecting\"):\n",
" old_status = self.status\n",
" self.status = \"Reflecting\"\n",
" self.pause_to_reflect()\n",
" # Hack to clear the importance from reflection\n",
" self.memory_importance = 0.0\n",
" self.status = old_status\n",
" return result\n",
" \n",
" def fetch_memories(self, observation: str) -> List[Document]:\n",
" \"\"\"Fetch related memories.\"\"\"\n",
" return self.memory_retriever.get_relevant_documents(observation)\n",
" \n",
" def get_summary(self, force_refresh: bool = False) -> str:\n",
" \"\"\"Return a descriptive summary of the agent.\"\"\"\n",
" current_time = datetime.now()\n",
" since_refresh = (current_time - self.last_refreshed).seconds\n",
" if not self.summary or since_refresh >= self.summary_refresh_seconds or force_refresh:\n",
" self.summary = self._compute_agent_summary()\n",
" self.last_refreshed = current_time\n",
" return (\n",
" f\"Name: {self.name} (age: {self.age})\"\n",
" +f\"\\nInnate traits: {self.traits}\"\n",
" +f\"\\n{self.summary}\"\n",
" )\n",
" \n",
" def get_full_header(self, force_refresh: bool = False) -> str:\n",
" \"\"\"Return a full header of the agent's status, summary, and current time.\"\"\"\n",
" summary = self.get_summary(force_refresh=force_refresh)\n",
" current_time_str = datetime.now().strftime(\"%B %d, %Y, %I:%M %p\")\n",
" return f\"{summary}\\nIt is {current_time_str}.\\n{self.name}'s status: {self.status}\"\n",
" \n",
" def _get_entity_from_observation(self, observation: str) -> str:\n",
" prompt = PromptTemplate.from_template(\n",
" \"What is the observed entity in the following observation? {observation}\"\n",
" +\"\\nEntity=\"\n",
" )\n",
" chain = LLMChain(llm=self.llm, prompt=prompt, verbose=self.verbose)\n",
" return chain.run(observation=observation).strip()\n",
"\n",
" def _get_entity_action(self, observation: str, entity_name: str) -> str:\n",
" prompt = PromptTemplate.from_template(\n",
" \"What is the {entity} doing in the following observation? {observation}\"\n",
" +\"\\nThe {entity} is\"\n",
" )\n",
" chain = LLMChain(llm=self.llm, prompt=prompt, verbose=self.verbose)\n",
" return chain.run(entity=entity_name, observation=observation).strip()\n",
" \n",
" def _format_memories_to_summarize(self, relevant_memories: List[Document]) -> str:\n",
" content_strs = set()\n",
" content = []\n",
" for mem in relevant_memories:\n",
" if mem.page_content in content_strs:\n",
" continue\n",
" content_strs.add(mem.page_content)\n",
" created_time = mem.metadata[\"created_at\"].strftime(\"%B %d, %Y, %I:%M %p\")\n",
" content.append(f\"- {created_time}: {mem.page_content.strip()}\")\n",
" return \"\\n\".join([f\"{mem}\" for mem in content])\n",
" \n",
" def summarize_related_memories(self, observation: str) -> str:\n",
" \"\"\"Summarize memories that are most relevant to an observation.\"\"\"\n",
" entity_name = self._get_entity_from_observation(observation)\n",
" entity_action = self._get_entity_action(observation, entity_name)\n",
" q1 = f\"What is the relationship between {self.name} and {entity_name}\"\n",
" relevant_memories = self.fetch_memories(q1) # Fetch memories related to the agent's relationship with the entity\n",
" q2 = f\"{entity_name} is {entity_action}\"\n",
" relevant_memories += self.fetch_memories(q2) # Fetch things related to the entity-action pair\n",
" context_str = self._format_memories_to_summarize(relevant_memories)\n",
" prompt = PromptTemplate.from_template(\n",
" \"{q1}?\\nContext from memory:\\n{context_str}\\nRelevant context: \"\n",
" )\n",
" chain = LLMChain(llm=self.llm, prompt=prompt, verbose=self.verbose)\n",
" return chain.run(q1=q1, context_str=context_str.strip()).strip()\n",
" \n",
" def _get_memories_until_limit(self, consumed_tokens: int) -> str:\n",
" \"\"\"Reduce the number of tokens in the documents.\"\"\"\n",
" result = []\n",
" for doc in self.memory_retriever.memory_stream[::-1]:\n",
" if consumed_tokens >= self.max_tokens_limit:\n",
" break\n",
" consumed_tokens += self.llm.get_num_tokens(doc.page_content)\n",
" if consumed_tokens < self.max_tokens_limit:\n",
" result.append(doc.page_content) \n",
" return \"; \".join(result[::-1])\n",
" \n",
" ## 정보를 받았을 때, 어떤 리액션을 할 것인지\n",
" def _generate_reaction(\n",
" self,\n",
" observation: str,\n",
" suffix: str\n",
" ) -> str:\n",
" \"\"\"React to a given observation.\"\"\"\n",
" prompt = PromptTemplate.from_template(\n",
" \"{agent_summary_description}\"\n",
" +\"\\nIt is {current_time}.\"\n",
" +\"\\n{agent_name}'s status: {agent_status}\"\n",
" + \"\\nSummary of relevant context from {agent_name}'s memory:\"\n",
" +\"\\n{relevant_memories}\"\n",
" +\"\\nMost recent observations: {recent_observations}\"\n",
" + \"\\nObservation: {observation}\"\n",
" + \"\\n\\n\" + suffix\n",
" )\n",
" agent_summary_description = self.get_summary()\n",
" relevant_memories_str = self.summarize_related_memories(observation)\n",
" current_time_str = datetime.now().strftime(\"%B %d, %Y, %I:%M %p\")\n",
" kwargs = dict(agent_summary_description=agent_summary_description,\n",
" current_time=current_time_str,\n",
" relevant_memories=relevant_memories_str,\n",
" agent_name=self.name,\n",
" observation=observation,\n",
" agent_status=self.status)\n",
" consumed_tokens = self.llm.get_num_tokens(prompt.format(recent_observations=\"\", **kwargs))\n",
" kwargs[\"recent_observations\"] = self._get_memories_until_limit(consumed_tokens)\n",
" action_prediction_chain = LLMChain(llm=self.llm, prompt=prompt)\n",
" result = action_prediction_chain.run(**kwargs)\n",
" return result.strip()\n",
" \n",
" def generate_reaction(self, observation: str) -> Tuple[bool, str]:\n",
" \"\"\"React to a given observation.\"\"\"\n",
" call_to_action_template = (\n",
" \"Should {agent_name} react to the observation, and if so,\"\n",
" +\" what would be an appropriate reaction? Respond in one line.\"\n",
" +' If the action is to engage in dialogue, write:\\nSAY: \"what to say\"'\n",
" +\"\\notherwise, write:\\nREACT: {agent_name}'s reaction (if anything).\"\n",
" + \"\\nEither do nothing, react, or say something but not both.\\n\\n\"\n",
" )\n",
" full_result = self._generate_reaction(observation, call_to_action_template)\n",
" result = full_result.strip().split('\\n')[0]\n",
" self.add_memory(f\"{self.name} observed {observation} and reacted by {result}\")\n",
" if \"REACT:\" in result:\n",
" reaction = result.split(\"REACT:\")[-1].strip()\n",
" return False, f\"{self.name} {reaction}\"\n",
" if \"SAY:\" in result:\n",
" said_value = result.split(\"SAY:\")[-1].strip()\n",
" return True, f\"{self.name} said {said_value}\"\n",
" else:\n",
" return False, result\n",
"\n",
" def generate_dialogue_response(self, observation: str) -> Tuple[bool, str]:\n",
" \"\"\"React to a given observation.\"\"\"\n",
" call_to_action_template = (\n",
" 'What would {agent_name} say? To end the conversation, write: GOODBYE: \"what to say\". Otherwise to continue the conversation, write: SAY: \"what to say next\"\\n\\n'\n",
" )\n",
" full_result = self._generate_reaction(observation, call_to_action_template)\n",
" result = full_result.strip().split('\\n')[0]\n",
" if \"GOODBYE:\" in result:\n",
" farewell = result.split(\"GOODBYE:\")[-1].strip()\n",
" self.add_memory(f\"{self.name} observed {observation} and said {farewell}\")\n",
" return False, f\"{self.name} said {farewell}\"\n",
" if \"SAY:\" in result:\n",
" response_text = result.split(\"SAY:\")[-1].strip()\n",
" self.add_memory(f\"{self.name} observed {observation} and said {response_text}\")\n",
" return True, f\"{self.name} said {response_text}\"\n",
" else:\n",
" return False, result"
],
"metadata": {
"id": "iHK40XtGVLzd"
},
"execution_count": 9,
"outputs": []
},
{
"cell_type": "code",
"source": [
"import math\n",
"import faiss\n",
"\n",
"def relevance_score_fn(score: float) -> float:\n",
" \"\"\"Return a similarity score on a scale [0, 1].\"\"\"\n",
" # This will differ depending on a few things:\n",
" # - the distance / similarity metric used by the VectorStore\n",
" # - the scale of your embeddings (OpenAI's are unit norm. Many others are not!)\n",
" # This function converts the euclidean norm of normalized embeddings\n",
" # (0 is most similar, sqrt(2) most dissimilar)\n",
" # to a similarity function (0 to 1)\n",
" return 1.0 - score / math.sqrt(2)\n",
"\n",
"def create_new_memory_retriever():\n",
" \"\"\"Create a new vector store retriever unique to the agent.\"\"\"\n",
" # Define your embedding model\n",
" embeddings_model = OpenAIEmbeddings()\n",
" # Initialize the vectorstore as empty\n",
" embedding_size = 1536\n",
" index = faiss.IndexFlatL2(embedding_size)\n",
" vectorstore = FAISS(embeddings_model.embed_query, index, InMemoryDocstore({}), {}, relevance_score_fn=relevance_score_fn)\n",
" return TimeWeightedVectorStoreRetriever(vectorstore=vectorstore, other_score_keys=[\"importance\"], k=15)"
],
"metadata": {
"id": "xVZOCVj8VnI0"
},
"execution_count": 10,
"outputs": []
},
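{
"cell_type": "markdown",
"source": [
"A small numeric check of the mapping above (illustrative only): identical normalized embeddings have L2 distance 0 and score 1.0, while orthogonal unit vectors have distance sqrt(2) and score 0.0.\n",
"\n",
"```python\n",
"import math\n",
"\n",
"print(relevance_score_fn(0.0))            # 1.0  (identical vectors)\n",
"print(relevance_score_fn(math.sqrt(2)))   # 0.0  (orthogonal unit vectors)\n",
"```"
],
"metadata": {}
},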
{
"cell_type": "code",
"source": [
"tommie = GenerativeAgent(name=\"Tommie\", \n",
" age=25,\n",
" traits=\"anxious, likes design\", # You can add more persistent traits here \n",
" status=\"looking for a job\", # When connected to a virtual world, we can have the characters update their status\n",
" memory_retriever=create_new_memory_retriever(),\n",
" llm=LLM,\n",
" daily_summaries = [\n",
" \"Drove across state to move to a new town but doesn't have a job yet.\"\n",
" ],\n",
" reflection_threshold = 8, # we will give this a relatively low number to show how reflection works\n",
" )"
],
"metadata": {
"id": "dRn0xZC4Vp4N"
},
"execution_count": 11,
"outputs": []
},
{
"cell_type": "code",
"source": [
"# The current \"Summary\" of a character can't be made because the agent hasn't made\n",
"# any observations yet.\n",
"print(tommie.get_summary())"
],
"metadata": {
"colab": {
"base_uri": "https://localhost:8080/"
},
"id": "1GKcV_20WuYP",
"outputId": "4e367994-9303-4a50-b668-dd82d30b8bd8"
},
"execution_count": 12,
"outputs": [
{
"output_type": "stream",
"name": "stdout",
"text": [
"Name: Tommie (age: 25)\n",
"Innate traits: anxious, likes design\n",
"Unfortunately, there are no statements provided to summarize Tommie's core characteristics.\n"
]
}
]
},
{
"cell_type": "code",
"source": [
"# We can give the character memories directly\n",
"# 메모리에 집어넣을 기억을 정의하고, 입력\n",
"# personal free account의 경우 ratelimit 걸려서 제대로 테스트하기 어렵다.\n",
"tommie_memories = [\n",
" \"Tommie remembers his dog, Bruno, from when he was a kid\",\n",
" \"Tommie feels tired from driving so far\",\n",
" \"Tommie sees the new home\",\n",
" \"The new neighbors have a cat\",\n",
" \"The road is noisy at night\",\n",
" \"Tommie is hungry\",\n",
" \"Tommie tries to get some rest.\",\n",
"]\n",
"for memory in tommie_memories:\n",
" tommie.add_memory(memory)"
],
"metadata": {
"colab": {
"base_uri": "https://localhost:8080/",
"height": 446
},
"id": "ghTyWooPWxJn",
"outputId": "7c263095-3570-4b91-97d3-c4691787992b"
},
"execution_count": 13,
"outputs": [
{
"output_type": "stream",
"name": "stderr",
"text": [
"WARNING:langchain.chat_models.openai:Retrying langchain.chat_models.openai.ChatOpenAI.completion_with_retry.<locals>._completion_with_retry in 1.0 seconds as it raised RateLimitError: Rate limit reached for default-gpt-3.5-turbo in organization org-mfNrETunpMAJBlt7q8zkKlhK on requests per min. Limit: 3 / min. Please try again in 20s. Contact [email protected] if you continue to have issues. Please add a payment method to your account to increase your rate limit. Visit https://platform.openai.com/account/billing to add a payment method..\n",
"WARNING:langchain.chat_models.openai:Retrying langchain.chat_models.openai.ChatOpenAI.completion_with_retry.<locals>._completion_with_retry in 2.0 seconds as it raised RateLimitError: Rate limit reached for default-gpt-3.5-turbo in organization org-mfNrETunpMAJBlt7q8zkKlhK on requests per min. Limit: 3 / min. Please try again in 20s. Contact [email protected] if you continue to have issues. Please add a payment method to your account to increase your rate limit. Visit https://platform.openai.com/account/billing to add a payment method..\n",
"WARNING:langchain.chat_models.openai:Retrying langchain.chat_models.openai.ChatOpenAI.completion_with_retry.<locals>._completion_with_retry in 4.0 seconds as it raised RateLimitError: Rate limit reached for default-gpt-3.5-turbo in organization org-mfNrETunpMAJBlt7q8zkKlhK on requests per min. Limit: 3 / min. Please try again in 20s. Contact [email protected] if you continue to have issues. Please add a payment method to your account to increase your rate limit. Visit https://platform.openai.com/account/billing to add a payment method..\n",
"WARNING:langchain.chat_models.openai:Retrying langchain.chat_models.openai.ChatOpenAI.completion_with_retry.<locals>._completion_with_retry in 8.0 seconds as it raised RateLimitError: Rate limit reached for default-gpt-3.5-turbo in organization org-mfNrETunpMAJBlt7q8zkKlhK on requests per min. Limit: 3 / min. Please try again in 20s. Contact [email protected] if you continue to have issues. Please add a payment method to your account to increase your rate limit. Visit https://platform.openai.com/account/billing to add a payment method..\n"
]
},
{
"output_type": "error",
"ename": "KeyboardInterrupt",
"evalue": "ignored",
"traceback": [
"\u001b[0;31m---------------------------------------------------------------------------\u001b[0m",
"\u001b[0;31mKeyboardInterrupt\u001b[0m Traceback (most recent call last)",
"\u001b[0;32m<ipython-input-13-83581d8c1085>\u001b[0m in \u001b[0;36m<cell line: 12>\u001b[0;34m()\u001b[0m\n\u001b[1;32m 11\u001b[0m ]\n\u001b[1;32m 12\u001b[0m \u001b[0;32mfor\u001b[0m \u001b[0mmemory\u001b[0m \u001b[0;32min\u001b[0m \u001b[0mtommie_memories\u001b[0m\u001b[0;34m:\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[0;32m---> 13\u001b[0;31m \u001b[0mtommie\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0madd_memory\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0mmemory\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[0m",
"\u001b[0;32m<ipython-input-9-9329484fef9b>\u001b[0m in \u001b[0;36madd_memory\u001b[0;34m(self, memory_content)\u001b[0m\n\u001b[1;32m 119\u001b[0m \u001b[0;32mdef\u001b[0m \u001b[0madd_memory\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0mself\u001b[0m\u001b[0;34m,\u001b[0m \u001b[0mmemory_content\u001b[0m\u001b[0;34m:\u001b[0m \u001b[0mstr\u001b[0m\u001b[0;34m)\u001b[0m \u001b[0;34m->\u001b[0m \u001b[0mList\u001b[0m\u001b[0;34m[\u001b[0m\u001b[0mstr\u001b[0m\u001b[0;34m]\u001b[0m\u001b[0;34m:\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[1;32m 120\u001b[0m \u001b[0;34m\"\"\"Add an observation or memory to the agent's memory.\"\"\"\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[0;32m--> 121\u001b[0;31m \u001b[0mimportance_score\u001b[0m \u001b[0;34m=\u001b[0m \u001b[0mself\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0m_score_memory_importance\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0mmemory_content\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[0m\u001b[1;32m 122\u001b[0m \u001b[0mself\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0mmemory_importance\u001b[0m \u001b[0;34m+=\u001b[0m \u001b[0mimportance_score\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[1;32m 123\u001b[0m \u001b[0mdocument\u001b[0m \u001b[0;34m=\u001b[0m \u001b[0mDocument\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0mpage_content\u001b[0m\u001b[0;34m=\u001b[0m\u001b[0mmemory_content\u001b[0m\u001b[0;34m,\u001b[0m \u001b[0mmetadata\u001b[0m\u001b[0;34m=\u001b[0m\u001b[0;34m{\u001b[0m\u001b[0;34m\"importance\"\u001b[0m\u001b[0;34m:\u001b[0m \u001b[0mimportance_score\u001b[0m\u001b[0;34m}\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n",
"\u001b[0;32m<ipython-input-9-9329484fef9b>\u001b[0m in \u001b[0;36m_score_memory_importance\u001b[0;34m(self, memory_content, weight)\u001b[0m\n\u001b[1;32m 110\u001b[0m )\n\u001b[1;32m 111\u001b[0m \u001b[0mchain\u001b[0m \u001b[0;34m=\u001b[0m \u001b[0mLLMChain\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0mllm\u001b[0m\u001b[0;34m=\u001b[0m\u001b[0mself\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0mllm\u001b[0m\u001b[0;34m,\u001b[0m \u001b[0mprompt\u001b[0m\u001b[0;34m=\u001b[0m\u001b[0mprompt\u001b[0m\u001b[0;34m,\u001b[0m \u001b[0mverbose\u001b[0m\u001b[0;34m=\u001b[0m\u001b[0mself\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0mverbose\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[0;32m--> 112\u001b[0;31m \u001b[0mscore\u001b[0m \u001b[0;34m=\u001b[0m \u001b[0mchain\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0mrun\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0mmemory_content\u001b[0m\u001b[0;34m=\u001b[0m\u001b[0mmemory_content\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0mstrip\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[0m\u001b[1;32m 113\u001b[0m \u001b[0mmatch\u001b[0m \u001b[0;34m=\u001b[0m \u001b[0mre\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0msearch\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0;34mr\"^\\D*(\\d+)\"\u001b[0m\u001b[0;34m,\u001b[0m \u001b[0mscore\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[1;32m 114\u001b[0m \u001b[0;32mif\u001b[0m \u001b[0mmatch\u001b[0m\u001b[0;34m:\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n",
"\u001b[0;32m/usr/local/lib/python3.9/dist-packages/langchain/chains/base.py\u001b[0m in \u001b[0;36mrun\u001b[0;34m(self, *args, **kwargs)\u001b[0m\n\u001b[1;32m 214\u001b[0m \u001b[0;34m\u001b[0m\u001b[0m\n\u001b[1;32m 215\u001b[0m \u001b[0;32mif\u001b[0m \u001b[0mkwargs\u001b[0m \u001b[0;32mand\u001b[0m \u001b[0;32mnot\u001b[0m \u001b[0margs\u001b[0m\u001b[0;34m:\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[0;32m--> 216\u001b[0;31m \u001b[0;32mreturn\u001b[0m \u001b[0mself\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0mkwargs\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m[\u001b[0m\u001b[0mself\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0moutput_keys\u001b[0m\u001b[0;34m[\u001b[0m\u001b[0;36m0\u001b[0m\u001b[0;34m]\u001b[0m\u001b[0;34m]\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[0m\u001b[1;32m 217\u001b[0m \u001b[0;34m\u001b[0m\u001b[0m\n\u001b[1;32m 218\u001b[0m raise ValueError(\n",
"\u001b[0;32m/usr/local/lib/python3.9/dist-packages/langchain/chains/base.py\u001b[0m in \u001b[0;36m__call__\u001b[0;34m(self, inputs, return_only_outputs)\u001b[0m\n\u001b[1;32m 114\u001b[0m \u001b[0;32mexcept\u001b[0m \u001b[0;34m(\u001b[0m\u001b[0mKeyboardInterrupt\u001b[0m\u001b[0;34m,\u001b[0m \u001b[0mException\u001b[0m\u001b[0;34m)\u001b[0m \u001b[0;32mas\u001b[0m \u001b[0me\u001b[0m\u001b[0;34m:\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[1;32m 115\u001b[0m \u001b[0mself\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0mcallback_manager\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0mon_chain_error\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0me\u001b[0m\u001b[0;34m,\u001b[0m \u001b[0mverbose\u001b[0m\u001b[0;34m=\u001b[0m\u001b[0mself\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0mverbose\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[0;32m--> 116\u001b[0;31m \u001b[0;32mraise\u001b[0m \u001b[0me\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[0m\u001b[1;32m 117\u001b[0m \u001b[0mself\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0mcallback_manager\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0mon_chain_end\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0moutputs\u001b[0m\u001b[0;34m,\u001b[0m \u001b[0mverbose\u001b[0m\u001b[0;34m=\u001b[0m\u001b[0mself\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0mverbose\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[1;32m 118\u001b[0m \u001b[0;32mreturn\u001b[0m \u001b[0mself\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0mprep_outputs\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0minputs\u001b[0m\u001b[0;34m,\u001b[0m \u001b[0moutputs\u001b[0m\u001b[0;34m,\u001b[0m \u001b[0mreturn_only_outputs\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n",
"\u001b[0;32m/usr/local/lib/python3.9/dist-packages/langchain/chains/base.py\u001b[0m in \u001b[0;36m__call__\u001b[0;34m(self, inputs, return_only_outputs)\u001b[0m\n\u001b[1;32m 111\u001b[0m )\n\u001b[1;32m 112\u001b[0m \u001b[0;32mtry\u001b[0m\u001b[0;34m:\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[0;32m--> 113\u001b[0;31m \u001b[0moutputs\u001b[0m \u001b[0;34m=\u001b[0m \u001b[0mself\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0m_call\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0minputs\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[0m\u001b[1;32m 114\u001b[0m \u001b[0;32mexcept\u001b[0m \u001b[0;34m(\u001b[0m\u001b[0mKeyboardInterrupt\u001b[0m\u001b[0;34m,\u001b[0m \u001b[0mException\u001b[0m\u001b[0;34m)\u001b[0m \u001b[0;32mas\u001b[0m \u001b[0me\u001b[0m\u001b[0;34m:\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[1;32m 115\u001b[0m \u001b[0mself\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0mcallback_manager\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0mon_chain_error\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0me\u001b[0m\u001b[0;34m,\u001b[0m \u001b[0mverbose\u001b[0m\u001b[0;34m=\u001b[0m\u001b[0mself\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0mverbose\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n",
"\u001b[0;32m/usr/local/lib/python3.9/dist-packages/langchain/chains/llm.py\u001b[0m in \u001b[0;36m_call\u001b[0;34m(self, inputs)\u001b[0m\n\u001b[1;32m 55\u001b[0m \u001b[0;34m\u001b[0m\u001b[0m\n\u001b[1;32m 56\u001b[0m \u001b[0;32mdef\u001b[0m \u001b[0m_call\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0mself\u001b[0m\u001b[0;34m,\u001b[0m \u001b[0minputs\u001b[0m\u001b[0;34m:\u001b[0m \u001b[0mDict\u001b[0m\u001b[0;34m[\u001b[0m\u001b[0mstr\u001b[0m\u001b[0;34m,\u001b[0m \u001b[0mAny\u001b[0m\u001b[0;34m]\u001b[0m\u001b[0;34m)\u001b[0m \u001b[0;34m->\u001b[0m \u001b[0mDict\u001b[0m\u001b[0;34m[\u001b[0m\u001b[0mstr\u001b[0m\u001b[0;34m,\u001b[0m \u001b[0mstr\u001b[0m\u001b[0;34m]\u001b[0m\u001b[0;34m:\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[0;32m---> 57\u001b[0;31m \u001b[0;32mreturn\u001b[0m \u001b[0mself\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0mapply\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0;34m[\u001b[0m\u001b[0minputs\u001b[0m\u001b[0;34m]\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m[\u001b[0m\u001b[0;36m0\u001b[0m\u001b[0;34m]\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[0m\u001b[1;32m 58\u001b[0m \u001b[0;34m\u001b[0m\u001b[0m\n\u001b[1;32m 59\u001b[0m \u001b[0;32mdef\u001b[0m \u001b[0mgenerate\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0mself\u001b[0m\u001b[0;34m,\u001b[0m \u001b[0minput_list\u001b[0m\u001b[0;34m:\u001b[0m \u001b[0mList\u001b[0m\u001b[0;34m[\u001b[0m\u001b[0mDict\u001b[0m\u001b[0;34m[\u001b[0m\u001b[0mstr\u001b[0m\u001b[0;34m,\u001b[0m \u001b[0mAny\u001b[0m\u001b[0;34m]\u001b[0m\u001b[0;34m]\u001b[0m\u001b[0;34m)\u001b[0m \u001b[0;34m->\u001b[0m \u001b[0mLLMResult\u001b[0m\u001b[0;34m:\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n",
"\u001b[0;32m/usr/local/lib/python3.9/dist-packages/langchain/chains/llm.py\u001b[0m in \u001b[0;36mapply\u001b[0;34m(self, input_list)\u001b[0m\n\u001b[1;32m 116\u001b[0m \u001b[0;32mdef\u001b[0m \u001b[0mapply\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0mself\u001b[0m\u001b[0;34m,\u001b[0m \u001b[0minput_list\u001b[0m\u001b[0;34m:\u001b[0m \u001b[0mList\u001b[0m\u001b[0;34m[\u001b[0m\u001b[0mDict\u001b[0m\u001b[0;34m[\u001b[0m\u001b[0mstr\u001b[0m\u001b[0;34m,\u001b[0m \u001b[0mAny\u001b[0m\u001b[0;34m]\u001b[0m\u001b[0;34m]\u001b[0m\u001b[0;34m)\u001b[0m \u001b[0;34m->\u001b[0m \u001b[0mList\u001b[0m\u001b[0;34m[\u001b[0m\u001b[0mDict\u001b[0m\u001b[0;34m[\u001b[0m\u001b[0mstr\u001b[0m\u001b[0;34m,\u001b[0m \u001b[0mstr\u001b[0m\u001b[0;34m]\u001b[0m\u001b[0;34m]\u001b[0m\u001b[0;34m:\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[1;32m 117\u001b[0m \u001b[0;34m\"\"\"Utilize the LLM generate method for speed gains.\"\"\"\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[0;32m--> 118\u001b[0;31m \u001b[0mresponse\u001b[0m \u001b[0;34m=\u001b[0m \u001b[0mself\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0mgenerate\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0minput_list\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[0m\u001b[1;32m 119\u001b[0m \u001b[0;32mreturn\u001b[0m \u001b[0mself\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0mcreate_outputs\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0mresponse\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[1;32m 120\u001b[0m \u001b[0;34m\u001b[0m\u001b[0m\n",
"\u001b[0;32m/usr/local/lib/python3.9/dist-packages/langchain/chains/llm.py\u001b[0m in \u001b[0;36mgenerate\u001b[0;34m(self, input_list)\u001b[0m\n\u001b[1;32m 60\u001b[0m \u001b[0;34m\"\"\"Generate LLM result from inputs.\"\"\"\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[1;32m 61\u001b[0m \u001b[0mprompts\u001b[0m\u001b[0;34m,\u001b[0m \u001b[0mstop\u001b[0m \u001b[0;34m=\u001b[0m \u001b[0mself\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0mprep_prompts\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0minput_list\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[0;32m---> 62\u001b[0;31m \u001b[0;32mreturn\u001b[0m \u001b[0mself\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0mllm\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0mgenerate_prompt\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0mprompts\u001b[0m\u001b[0;34m,\u001b[0m \u001b[0mstop\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[0m\u001b[1;32m 63\u001b[0m \u001b[0;34m\u001b[0m\u001b[0m\n\u001b[1;32m 64\u001b[0m \u001b[0;32masync\u001b[0m \u001b[0;32mdef\u001b[0m \u001b[0magenerate\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0mself\u001b[0m\u001b[0;34m,\u001b[0m \u001b[0minput_list\u001b[0m\u001b[0;34m:\u001b[0m \u001b[0mList\u001b[0m\u001b[0;34m[\u001b[0m\u001b[0mDict\u001b[0m\u001b[0;34m[\u001b[0m\u001b[0mstr\u001b[0m\u001b[0;34m,\u001b[0m \u001b[0mAny\u001b[0m\u001b[0;34m]\u001b[0m\u001b[0;34m]\u001b[0m\u001b[0;34m)\u001b[0m \u001b[0;34m->\u001b[0m \u001b[0mLLMResult\u001b[0m\u001b[0;34m:\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n",
"\u001b[0;32m/usr/local/lib/python3.9/dist-packages/langchain/chat_models/base.py\u001b[0m in \u001b[0;36mgenerate_prompt\u001b[0;34m(self, prompts, stop)\u001b[0m\n\u001b[1;32m 80\u001b[0m \u001b[0;32mexcept\u001b[0m \u001b[0;34m(\u001b[0m\u001b[0mKeyboardInterrupt\u001b[0m\u001b[0;34m,\u001b[0m \u001b[0mException\u001b[0m\u001b[0;34m)\u001b[0m \u001b[0;32mas\u001b[0m \u001b[0me\u001b[0m\u001b[0;34m:\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[1;32m 81\u001b[0m \u001b[0mself\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0mcallback_manager\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0mon_llm_error\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0me\u001b[0m\u001b[0;34m,\u001b[0m \u001b[0mverbose\u001b[0m\u001b[0;34m=\u001b[0m\u001b[0mself\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0mverbose\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[0;32m---> 82\u001b[0;31m \u001b[0;32mraise\u001b[0m \u001b[0me\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[0m\u001b[1;32m 83\u001b[0m \u001b[0mself\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0mcallback_manager\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0mon_llm_end\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0moutput\u001b[0m\u001b[0;34m,\u001b[0m \u001b[0mverbose\u001b[0m\u001b[0;34m=\u001b[0m\u001b[0mself\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0mverbose\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[1;32m 84\u001b[0m \u001b[0;32mreturn\u001b[0m \u001b[0moutput\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n",
"\u001b[0;32m/usr/local/lib/python3.9/dist-packages/langchain/chat_models/base.py\u001b[0m in \u001b[0;36mgenerate_prompt\u001b[0;34m(self, prompts, stop)\u001b[0m\n\u001b[1;32m 77\u001b[0m )\n\u001b[1;32m 78\u001b[0m \u001b[0;32mtry\u001b[0m\u001b[0;34m:\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[0;32m---> 79\u001b[0;31m \u001b[0moutput\u001b[0m \u001b[0;34m=\u001b[0m \u001b[0mself\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0mgenerate\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0mprompt_messages\u001b[0m\u001b[0;34m,\u001b[0m \u001b[0mstop\u001b[0m\u001b[0;34m=\u001b[0m\u001b[0mstop\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[0m\u001b[1;32m 80\u001b[0m \u001b[0;32mexcept\u001b[0m \u001b[0;34m(\u001b[0m\u001b[0mKeyboardInterrupt\u001b[0m\u001b[0;34m,\u001b[0m \u001b[0mException\u001b[0m\u001b[0;34m)\u001b[0m \u001b[0;32mas\u001b[0m \u001b[0me\u001b[0m\u001b[0;34m:\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[1;32m 81\u001b[0m \u001b[0mself\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0mcallback_manager\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0mon_llm_error\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0me\u001b[0m\u001b[0;34m,\u001b[0m \u001b[0mverbose\u001b[0m\u001b[0;34m=\u001b[0m\u001b[0mself\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0mverbose\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n",
"\u001b[0;32m/usr/local/lib/python3.9/dist-packages/langchain/chat_models/base.py\u001b[0m in \u001b[0;36mgenerate\u001b[0;34m(self, messages, stop)\u001b[0m\n\u001b[1;32m 52\u001b[0m ) -> LLMResult:\n\u001b[1;32m 53\u001b[0m \u001b[0;34m\"\"\"Top Level call\"\"\"\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[0;32m---> 54\u001b[0;31m \u001b[0mresults\u001b[0m \u001b[0;34m=\u001b[0m \u001b[0;34m[\u001b[0m\u001b[0mself\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0m_generate\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0mm\u001b[0m\u001b[0;34m,\u001b[0m \u001b[0mstop\u001b[0m\u001b[0;34m=\u001b[0m\u001b[0mstop\u001b[0m\u001b[0;34m)\u001b[0m \u001b[0;32mfor\u001b[0m \u001b[0mm\u001b[0m \u001b[0;32min\u001b[0m \u001b[0mmessages\u001b[0m\u001b[0;34m]\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[0m\u001b[1;32m 55\u001b[0m \u001b[0mllm_output\u001b[0m \u001b[0;34m=\u001b[0m \u001b[0mself\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0m_combine_llm_outputs\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0;34m[\u001b[0m\u001b[0mres\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0mllm_output\u001b[0m \u001b[0;32mfor\u001b[0m \u001b[0mres\u001b[0m \u001b[0;32min\u001b[0m \u001b[0mresults\u001b[0m\u001b[0;34m]\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[1;32m 56\u001b[0m \u001b[0mgenerations\u001b[0m \u001b[0;34m=\u001b[0m \u001b[0;34m[\u001b[0m\u001b[0mres\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0mgenerations\u001b[0m \u001b[0;32mfor\u001b[0m \u001b[0mres\u001b[0m \u001b[0;32min\u001b[0m \u001b[0mresults\u001b[0m\u001b[0;34m]\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n",
"\u001b[0;32m/usr/local/lib/python3.9/dist-packages/langchain/chat_models/base.py\u001b[0m in \u001b[0;36m<listcomp>\u001b[0;34m(.0)\u001b[0m\n\u001b[1;32m 52\u001b[0m ) -> LLMResult:\n\u001b[1;32m 53\u001b[0m \u001b[0;34m\"\"\"Top Level call\"\"\"\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[0;32m---> 54\u001b[0;31m \u001b[0mresults\u001b[0m \u001b[0;34m=\u001b[0m \u001b[0;34m[\u001b[0m\u001b[0mself\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0m_generate\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0mm\u001b[0m\u001b[0;34m,\u001b[0m \u001b[0mstop\u001b[0m\u001b[0;34m=\u001b[0m\u001b[0mstop\u001b[0m\u001b[0;34m)\u001b[0m \u001b[0;32mfor\u001b[0m \u001b[0mm\u001b[0m \u001b[0;32min\u001b[0m \u001b[0mmessages\u001b[0m\u001b[0;34m]\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[0m\u001b[1;32m 55\u001b[0m \u001b[0mllm_output\u001b[0m \u001b[0;34m=\u001b[0m \u001b[0mself\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0m_combine_llm_outputs\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0;34m[\u001b[0m\u001b[0mres\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0mllm_output\u001b[0m \u001b[0;32mfor\u001b[0m \u001b[0mres\u001b[0m \u001b[0;32min\u001b[0m \u001b[0mresults\u001b[0m\u001b[0;34m]\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[1;32m 56\u001b[0m \u001b[0mgenerations\u001b[0m \u001b[0;34m=\u001b[0m \u001b[0;34m[\u001b[0m\u001b[0mres\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0mgenerations\u001b[0m \u001b[0;32mfor\u001b[0m \u001b[0mres\u001b[0m \u001b[0;32min\u001b[0m \u001b[0mresults\u001b[0m\u001b[0;34m]\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n",
"\u001b[0;32m/usr/local/lib/python3.9/dist-packages/langchain/chat_models/openai.py\u001b[0m in \u001b[0;36m_generate\u001b[0;34m(self, messages, stop)\u001b[0m\n\u001b[1;32m 264\u001b[0m )\n\u001b[1;32m 265\u001b[0m \u001b[0;32mreturn\u001b[0m \u001b[0mChatResult\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0mgenerations\u001b[0m\u001b[0;34m=\u001b[0m\u001b[0;34m[\u001b[0m\u001b[0mChatGeneration\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0mmessage\u001b[0m\u001b[0;34m=\u001b[0m\u001b[0mmessage\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m]\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[0;32m--> 266\u001b[0;31m \u001b[0mresponse\u001b[0m \u001b[0;34m=\u001b[0m \u001b[0mself\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0mcompletion_with_retry\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0mmessages\u001b[0m\u001b[0;34m=\u001b[0m\u001b[0mmessage_dicts\u001b[0m\u001b[0;34m,\u001b[0m \u001b[0;34m**\u001b[0m\u001b[0mparams\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[0m\u001b[1;32m 267\u001b[0m \u001b[0;32mreturn\u001b[0m \u001b[0mself\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0m_create_chat_result\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0mresponse\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[1;32m 268\u001b[0m \u001b[0;34m\u001b[0m\u001b[0m\n",
"\u001b[0;32m/usr/local/lib/python3.9/dist-packages/langchain/chat_models/openai.py\u001b[0m in \u001b[0;36mcompletion_with_retry\u001b[0;34m(self, **kwargs)\u001b[0m\n\u001b[1;32m 226\u001b[0m \u001b[0;32mreturn\u001b[0m \u001b[0mself\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0mclient\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0mcreate\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0;34m**\u001b[0m\u001b[0mkwargs\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[1;32m 227\u001b[0m \u001b[0;34m\u001b[0m\u001b[0m\n\u001b[0;32m--> 228\u001b[0;31m \u001b[0;32mreturn\u001b[0m \u001b[0m_completion_with_retry\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0;34m**\u001b[0m\u001b[0mkwargs\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[0m\u001b[1;32m 229\u001b[0m \u001b[0;34m\u001b[0m\u001b[0m\n\u001b[1;32m 230\u001b[0m \u001b[0;32mdef\u001b[0m \u001b[0m_combine_llm_outputs\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0mself\u001b[0m\u001b[0;34m,\u001b[0m \u001b[0mllm_outputs\u001b[0m\u001b[0;34m:\u001b[0m \u001b[0mList\u001b[0m\u001b[0;34m[\u001b[0m\u001b[0mOptional\u001b[0m\u001b[0;34m[\u001b[0m\u001b[0mdict\u001b[0m\u001b[0;34m]\u001b[0m\u001b[0;34m]\u001b[0m\u001b[0;34m)\u001b[0m \u001b[0;34m->\u001b[0m \u001b[0mdict\u001b[0m\u001b[0;34m:\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n",
"\u001b[0;32m/usr/local/lib/python3.9/dist-packages/tenacity/__init__.py\u001b[0m in \u001b[0;36mwrapped_f\u001b[0;34m(*args, **kw)\u001b[0m\n\u001b[1;32m 287\u001b[0m \u001b[0;34m@\u001b[0m\u001b[0mfunctools\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0mwraps\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0mf\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[1;32m 288\u001b[0m \u001b[0;32mdef\u001b[0m \u001b[0mwrapped_f\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0;34m*\u001b[0m\u001b[0margs\u001b[0m\u001b[0;34m:\u001b[0m \u001b[0mt\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0mAny\u001b[0m\u001b[0;34m,\u001b[0m \u001b[0;34m**\u001b[0m\u001b[0mkw\u001b[0m\u001b[0;34m:\u001b[0m \u001b[0mt\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0mAny\u001b[0m\u001b[0;34m)\u001b[0m \u001b[0;34m->\u001b[0m \u001b[0mt\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0mAny\u001b[0m\u001b[0;34m:\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[0;32m--> 289\u001b[0;31m \u001b[0;32mreturn\u001b[0m \u001b[0mself\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0mf\u001b[0m\u001b[0;34m,\u001b[0m \u001b[0;34m*\u001b[0m\u001b[0margs\u001b[0m\u001b[0;34m,\u001b[0m \u001b[0;34m**\u001b[0m\u001b[0mkw\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[0m\u001b[1;32m 290\u001b[0m \u001b[0;34m\u001b[0m\u001b[0m\n\u001b[1;32m 291\u001b[0m \u001b[0;32mdef\u001b[0m \u001b[0mretry_with\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0;34m*\u001b[0m\u001b[0margs\u001b[0m\u001b[0;34m:\u001b[0m \u001b[0mt\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0mAny\u001b[0m\u001b[0;34m,\u001b[0m \u001b[0;34m**\u001b[0m\u001b[0mkwargs\u001b[0m\u001b[0;34m:\u001b[0m \u001b[0mt\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0mAny\u001b[0m\u001b[0;34m)\u001b[0m \u001b[0;34m->\u001b[0m \u001b[0mWrappedFn\u001b[0m\u001b[0;34m:\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n",
"\u001b[0;32m/usr/local/lib/python3.9/dist-packages/tenacity/__init__.py\u001b[0m in \u001b[0;36m__call__\u001b[0;34m(self, fn, *args, **kwargs)\u001b[0m\n\u001b[1;32m 387\u001b[0m \u001b[0;32melif\u001b[0m \u001b[0misinstance\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0mdo\u001b[0m\u001b[0;34m,\u001b[0m \u001b[0mDoSleep\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m:\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[1;32m 388\u001b[0m \u001b[0mretry_state\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0mprepare_for_next_attempt\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[0;32m--> 389\u001b[0;31m \u001b[0mself\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0msleep\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0mdo\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[0m\u001b[1;32m 390\u001b[0m \u001b[0;32melse\u001b[0m\u001b[0;34m:\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[1;32m 391\u001b[0m \u001b[0;32mreturn\u001b[0m \u001b[0mdo\u001b[0m \u001b[0;31m# type: ignore[no-any-return]\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n",
"\u001b[0;32m/usr/local/lib/python3.9/dist-packages/tenacity/nap.py\u001b[0m in \u001b[0;36msleep\u001b[0;34m(seconds)\u001b[0m\n\u001b[1;32m 29\u001b[0m \u001b[0mThis\u001b[0m \u001b[0;32mis\u001b[0m \u001b[0mthe\u001b[0m \u001b[0mdefault\u001b[0m \u001b[0mstrategy\u001b[0m\u001b[0;34m,\u001b[0m \u001b[0;32mand\u001b[0m \u001b[0mmay\u001b[0m \u001b[0mbe\u001b[0m \u001b[0mmocked\u001b[0m \u001b[0mout\u001b[0m \u001b[0;32mfor\u001b[0m \u001b[0munit\u001b[0m \u001b[0mtesting\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[1;32m 30\u001b[0m \"\"\"\n\u001b[0;32m---> 31\u001b[0;31m \u001b[0mtime\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0msleep\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0mseconds\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[0m\u001b[1;32m 32\u001b[0m \u001b[0;34m\u001b[0m\u001b[0m\n\u001b[1;32m 33\u001b[0m \u001b[0;34m\u001b[0m\u001b[0m\n",
"\u001b[0;31mKeyboardInterrupt\u001b[0m: "
]
}
]
},
{
"cell_type": "code",
"source": [
"# Now that Tommie has 'memories', their self-summary is more descriptive, though still rudimentary.\n",
"# We will see how this summary updates after more observations to create a more rich description.\n",
"print(tommie.get_summary(force_refresh=True))"
],
"metadata": {
"id": "AKKsX-8NW4Nj"
},
"execution_count": null,
"outputs": []
},
{
"cell_type": "code",
"source": [
"def interview_agent(agent: GenerativeAgent, message: str) -> str:\n",
" \"\"\"Help the notebook user interact with the agent.\"\"\"\n",
" new_message = f\"{USER_NAME} says {message}\"\n",
" return agent.generate_dialogue_response(new_message)[1]"
],
"metadata": {
"id": "JFymB1uc70wj"
},
"execution_count": null,
"outputs": []
},
{
"cell_type": "code",
"source": [
"interview_agent(tommie, \"What do you like to do?\")"
],
"metadata": {
"id": "uqiIdv2_73mW"
},
"execution_count": null,
"outputs": []
},
{
"cell_type": "code",
"source": [
"interview_agent(tommie, \"What are you looking forward to doing today?\")"
],
"metadata": {
"id": "YuDIIFtF75ZN"
},
"execution_count": null,
"outputs": []
},
{
"cell_type": "code",
"source": [
"interview_agent(tommie, \"What are you most worried about today?\")"
],
"metadata": {
"id": "fpGHjLGl76rv"
},
"execution_count": null,
"outputs": []
},
{
"cell_type": "code",
"source": [
"# Let's have Tommie start going through a day in the life.\n",
"# 어떤 상황에서 어떤 행동을 했는지 정보 제공\n",
"observations = [\n",
" \"Tommie wakes up to the sound of a noisy construction site outside his window.\",\n",
" \"Tommie gets out of bed and heads to the kitchen to make himself some coffee.\",\n",
" \"Tommie realizes he forgot to buy coffee filters and starts rummaging through his moving boxes to find some.\",\n",
" \"Tommie finally finds the filters and makes himself a cup of coffee.\",\n",
" \"The coffee tastes bitter, and Tommie regrets not buying a better brand.\",\n",
" \"Tommie checks his email and sees that he has no job offers yet.\",\n",
" \"Tommie spends some time updating his resume and cover letter.\",\n",
" \"Tommie heads out to explore the city and look for job openings.\",\n",
" \"Tommie sees a sign for a job fair and decides to attend.\",\n",
" \"The line to get in is long, and Tommie has to wait for an hour.\",\n",
" \"Tommie meets several potential employers at the job fair but doesn't receive any offers.\",\n",
" \"Tommie leaves the job fair feeling disappointed.\",\n",
" \"Tommie stops by a local diner to grab some lunch.\",\n",
" \"The service is slow, and Tommie has to wait for 30 minutes to get his food.\",\n",
" \"Tommie overhears a conversation at the next table about a job opening.\",\n",
" \"Tommie asks the diners about the job opening and gets some information about the company.\",\n",
" \"Tommie decides to apply for the job and sends his resume and cover letter.\",\n",
" \"Tommie continues his search for job openings and drops off his resume at several local businesses.\",\n",
" \"Tommie takes a break from his job search to go for a walk in a nearby park.\",\n",
" \"A dog approaches and licks Tommie's feet, and he pets it for a few minutes.\",\n",
" \"Tommie sees a group of people playing frisbee and decides to join in.\",\n",
" \"Tommie has fun playing frisbee but gets hit in the face with the frisbee and hurts his nose.\",\n",
" \"Tommie goes back to his apartment to rest for a bit.\",\n",
" \"A raccoon tore open the trash bag outside his apartment, and the garbage is all over the floor.\",\n",
" \"Tommie starts to feel frustrated with his job search.\",\n",
" \"Tommie calls his best friend to vent about his struggles.\",\n",
" \"Tommie's friend offers some words of encouragement and tells him to keep trying.\",\n",
" \"Tommie feels slightly better after talking to his friend.\",\n",
"]"
],
"metadata": {
"id": "FpIiAIgW7-GO"
},
"execution_count": null,
"outputs": []
},
{
"cell_type": "code",
"source": [
"# Let's send Tommie on their way. We'll check in on their summary every few observations to watch it evolve\n",
"# 특정 환경을 제공하면, 어떤 반응이나 행동을 할 것인지 예측해서 알려줌\n",
"for i, observation in enumerate(observations):\n",
" _, reaction = tommie.generate_reaction(observation)\n",
" print(colored(observation, \"green\"), reaction)\n",
" if ((i+1) % 20) == 0:\n",
" print('*'*40)\n",
" print(colored(f\"After {i+1} observations, Tommie's summary is:\\n{tommie.get_summary(force_refresh=True)}\", \"blue\"))\n",
" print('*'*40)"
],
"metadata": {
"id": "sOIREnO68U1Q"
},
"execution_count": null,
"outputs": []
},
{
"cell_type": "code",
"source": [
"interview_agent(tommie, \"Tell me about how your day has been going\")"
],
"metadata": {
"id": "maIrxOlG8ili"
},
"execution_count": null,
"outputs": []
},
{
"cell_type": "code",
"source": [
"interview_agent(tommie, \"How do you feel about coffee?\")"
],
"metadata": {
"id": "hENd23yC8kkE"
},
"execution_count": null,
"outputs": []
},
{
"cell_type": "code",
"source": [
"interview_agent(tommie, \"Tell me about your childhood dog!\")"
],
"metadata": {
"id": "zzVIp_lV8lqo"
},
"execution_count": null,
"outputs": []
},
{
"cell_type": "markdown",
"source": [
"두 번째 객체로 Eve를 생성"
],
"metadata": {
"id": "qpeX_YH28qTv"
}
},
{
"cell_type": "code",
"source": [
"eve = GenerativeAgent(name=\"Eve\", \n",
" age=34, \n",
" traits=\"curious, helpful\", # You can add more persistent traits here \n",
" status=\"N/A\", # When connected to a virtual world, we can have the characters update their status\n",
" memory_retriever=create_new_memory_retriever(),\n",
" llm=LLM,\n",
" daily_summaries = [\n",
" (\"Eve started her new job as a career counselor last week and received her first assignment, a client named Tommie.\")\n",
" ],\n",
" reflection_threshold = 5,\n",
" )"
],
"metadata": {
"id": "X3scAfrD8m3v"
},
"execution_count": null,
"outputs": []
},
{
"cell_type": "code",
"source": [
"yesterday = (datetime.now() - timedelta(days=1)).strftime(\"%A %B %d\")\n",
"eve_memories = [\n",
" \"Eve overhears her colleague say something about a new client being hard to work with\",\n",
" \"Eve wakes up and hear's the alarm\",\n",
" \"Eve eats a boal of porridge\",\n",
" \"Eve helps a coworker on a task\",\n",
" \"Eve plays tennis with her friend Xu before going to work\",\n",
" \"Eve overhears her colleague say something about Tommie being hard to work with\",\n",
" \n",
"]\n",
"for memory in eve_memories:\n",
" eve.add_memory(memory)"
],
"metadata": {
"id": "ZikJRWE98uFd"
},
"execution_count": null,
"outputs": []
},
{
"cell_type": "code",
"source": [
"print(eve.get_summary())"
],
"metadata": {
"id": "Wlym0hLW8vbc"
},
"execution_count": null,
"outputs": []
},
{
"cell_type": "code",
"source": [
"interview_agent(eve, \"How are you feeling about today?\")"
],
"metadata": {
"id": "e4LYTcKh8xBN"
},
"execution_count": null,
"outputs": []
},
{
"cell_type": "code",
"source": [
"interview_agent(eve, \"You'll have to ask him. He may be a bit anxious, so I'd appreciate it if you keep the conversation going and ask as many questions as possible.\")"
],
"metadata": {
"id": "pQOPCUx58y9t"
},
"execution_count": null,
"outputs": []
},
{
"cell_type": "code",
"source": [
"## 두 명을 대화시키는 함수 생성\n",
"def run_conversation(agents: List[GenerativeAgent], initial_observation: str) -> None:\n",
" \"\"\"Runs a conversation between agents.\"\"\"\n",
" _, observation = agents[1].generate_reaction(initial_observation)\n",
" print(observation)\n",
" turns = 0\n",
" while True:\n",
" break_dialogue = False\n",
" for agent in agents:\n",
" stay_in_dialogue, observation = agent.generate_dialogue_response(observation)\n",
" print(observation)\n",
" # observation = f\"{agent.name} said {reaction}\"\n",
" if not stay_in_dialogue:\n",
" break_dialogue = True \n",
" if break_dialogue:\n",
" break\n",
" turns += 1"
],
"metadata": {
"id": "8GV_4eaf80MF"
},
"execution_count": null,
"outputs": []
},
{
"cell_type": "code",
"source": [
"agents = [tommie, eve]\n",
"run_conversation(agents, \"Tommie said: Hi, Eve. Thanks for agreeing to share your story with me and give me advice. I have a bunch of questions.\")"
],
"metadata": {
"id": "-vinzOm-84QQ"
},
"execution_count": null,
"outputs": []
},
{
"cell_type": "code",
"source": [
"# We can see a current \"Summary\" of a character based on their own perception of self\n",
"# has changed\n",
"print(tommie.get_summary(force_refresh=True))"
],
"metadata": {
"id": "ER7-pW3686E8"
},
"execution_count": null,
"outputs": []
},
{
"cell_type": "code",
"source": [
"print(eve.get_summary(force_refresh=True))"
],
"metadata": {
"id": "BjkoDDUY87LR"
},
"execution_count": null,
"outputs": []
},
{
"cell_type": "code",
"source": [
"interview_agent(tommie, \"How was your conversation with Eve?\")"
],
"metadata": {
"id": "1pl5r4X_92Ij"
},
"execution_count": null,
"outputs": []
},
{
"cell_type": "code",
"source": [
"interview_agent(eve, \"How was your conversation with Tommie?\")"
],
"metadata": {
"id": "OPT66fC393op"
},
"execution_count": null,
"outputs": []
},
{
"cell_type": "code",
"source": [
"interview_agent(eve, \"What do you wish you would have said to Tommie?\")"
],
"metadata": {
"id": "wcy_500w93hF"
},
"execution_count": null,
"outputs": []
},
{
"cell_type": "code",
"source": [
"interview_agent(tommie, \"What happened with your coffee this morning?\")"
],
"metadata": {
"id": "WpU6s56m97fj"
},
"execution_count": null,
"outputs": []
},
{
"cell_type": "markdown",
"source": [
"## 문서 QA"
],
"metadata": {
"id": "pelXSQZD98De"
}
},
{
"cell_type": "code",
"source": [
"!pip install pypdf"
],
"metadata": {
"colab": {
"base_uri": "https://localhost:8080/"
},
"id": "CAl2giQI-Ry6",
"outputId": "af86f2d0-d046-4e06-8cfb-be1cad43180e"
},
"execution_count": 18,
"outputs": [
{
"output_type": "stream",
"name": "stdout",
"text": [
"Looking in indexes: https://pypi.org/simple, https://us-python.pkg.dev/colab-wheels/public/simple/\n",
"Collecting pypdf\n",
" Downloading pypdf-3.8.1-py3-none-any.whl (248 kB)\n",
"\u001b[2K \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m248.8/248.8 kB\u001b[0m \u001b[31m8.9 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
"\u001b[?25hRequirement already satisfied: typing_extensions>=3.10.0.0 in /usr/local/lib/python3.9/dist-packages (from pypdf) (4.5.0)\n",
"Installing collected packages: pypdf\n",
"Successfully installed pypdf-3.8.1\n"
]
}
]
},
{
"cell_type": "code",
"source": [
"from langchain.embeddings.openai import OpenAIEmbeddings\n",
"from langchain.vectorstores import Chroma\n",
"from langchain.text_splitter import CharacterTextSplitter\n",
"from langchain.document_loaders import PyPDFLoader\n",
"from langchain.llms import OpenAI\n",
"from langchain.chains import RetrievalQA"
],
"metadata": {
"id": "tTaNYPJZ9_aa"
},
"execution_count": 16,
"outputs": []
},
{
"cell_type": "code",
"source": [
"# PDF 파일 경로 목록\n",
"pdf_files = [\n",
" \"docs/AF_회사소개서.pdf\",\n",
" \"docs/AF_러닝데이_overview.pdf\",\n",
" \"docs/인공지능팩토리_개발용역_실적_요약서.pdf\"\n",
"]\n",
"\n",
"# 로더, 텍스트 분할기 및 임베딩 초기화\n",
"text_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0)\n",
"embeddings = OpenAIEmbeddings()\n",
"\n",
"texts = []\n",
"\n",
"# 각 PDF 파일에 대해 작업 수행\n",
"for pdf_file in pdf_files:\n",
" loader = PyPDFLoader(pdf_file)\n",
" documents = loader.load()\n",
" texts.extend(text_splitter.split_documents(documents))\n",
"\n",
"# chroma라는 vector DB에 저장\n",
"docsearch = Chroma.from_documents(texts, embeddings)"
],
"metadata": {
"colab": {
"base_uri": "https://localhost:8080/",
"height": 444
},
"id": "JNvsObaf9_rp",
"outputId": "95bb64c2-d2e0-4450-a7bd-9cbf30184069"
},
"execution_count": 17,
"outputs": [
{
"output_type": "error",
"ename": "ValueError",
"evalue": "ignored",
"traceback": [
"\u001b[0;31m---------------------------------------------------------------------------\u001b[0m",
"\u001b[0;31mModuleNotFoundError\u001b[0m Traceback (most recent call last)",
"\u001b[0;32m/usr/local/lib/python3.9/dist-packages/langchain/document_loaders/pdf.py\u001b[0m in \u001b[0;36m__init__\u001b[0;34m(self, file_path)\u001b[0m\n\u001b[1;32m 86\u001b[0m \u001b[0;32mtry\u001b[0m\u001b[0;34m:\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[0;32m---> 87\u001b[0;31m \u001b[0;32mimport\u001b[0m \u001b[0mpypdf\u001b[0m \u001b[0;31m# noqa:F401\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[0m\u001b[1;32m 88\u001b[0m \u001b[0;32mexcept\u001b[0m \u001b[0mImportError\u001b[0m\u001b[0;34m:\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n",
"\u001b[0;31mModuleNotFoundError\u001b[0m: No module named 'pypdf'",
"\nDuring handling of the above exception, another exception occurred:\n",
"\u001b[0;31mValueError\u001b[0m Traceback (most recent call last)",
"\u001b[0;32m<ipython-input-17-d3881ec90308>\u001b[0m in \u001b[0;36m<cell line: 15>\u001b[0;34m()\u001b[0m\n\u001b[1;32m 14\u001b[0m \u001b[0;31m# 각 PDF 파일에 대해 작업 수행\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[1;32m 15\u001b[0m \u001b[0;32mfor\u001b[0m \u001b[0mpdf_file\u001b[0m \u001b[0;32min\u001b[0m \u001b[0mpdf_files\u001b[0m\u001b[0;34m:\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[0;32m---> 16\u001b[0;31m \u001b[0mloader\u001b[0m \u001b[0;34m=\u001b[0m \u001b[0mPyPDFLoader\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0mpdf_file\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[0m\u001b[1;32m 17\u001b[0m \u001b[0mdocuments\u001b[0m \u001b[0;34m=\u001b[0m \u001b[0mloader\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0mload\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[1;32m 18\u001b[0m \u001b[0mtexts\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0mextend\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0mtext_splitter\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0msplit_documents\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0mdocuments\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n",
"\u001b[0;32m/usr/local/lib/python3.9/dist-packages/langchain/document_loaders/pdf.py\u001b[0m in \u001b[0;36m__init__\u001b[0;34m(self, file_path)\u001b[0m\n\u001b[1;32m 87\u001b[0m \u001b[0;32mimport\u001b[0m \u001b[0mpypdf\u001b[0m \u001b[0;31m# noqa:F401\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[1;32m 88\u001b[0m \u001b[0;32mexcept\u001b[0m \u001b[0mImportError\u001b[0m\u001b[0;34m:\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[0;32m---> 89\u001b[0;31m raise ValueError(\n\u001b[0m\u001b[1;32m 90\u001b[0m \u001b[0;34m\"pypdf package not found, please install it with \"\u001b[0m \u001b[0;34m\"`pip install pypdf`\"\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[1;32m 91\u001b[0m )\n",
"\u001b[0;31mValueError\u001b[0m: pypdf package not found, please install it with `pip install pypdf`"
]
}
]
},
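{
"cell_type": "markdown",
"source": [
"Before wiring the retriever into a QA chain, it can help to sanity-check the vector store directly. The sketch below is an illustrative addition (it assumes the `docsearch` Chroma store above was built successfully after installing `pypdf`): it runs a plain similarity search and prints the source file and a short snippet of each retrieved chunk."
],
"metadata": {}
},
{
"cell_type": "code",
"source": [
"# (Sketch) Sanity-check retrieval before building the QA chain.\n",
"# Assumes the `docsearch` Chroma store above was created successfully.\n",
"sample_query = \"인공지능팩토리는 무엇을 하는 회사인가?\"\n",
"for doc in docsearch.similarity_search(sample_query, k=2):\n",
"    print(doc.metadata.get(\"source\"), \"-\", doc.page_content[:80])"
],
"metadata": {},
"execution_count": null,
"outputs": []
},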
{
"cell_type": "code",
"source": [
"# 질문이 오면, vector DB에 검색된 것을 기반으로 답변하도록 prompt 생성\n",
"from langchain.prompts import PromptTemplate\n",
"prompt_template = \"\"\"Use the following pieces of context to answer the question at the end. If you don't know the answer, just say that you don't know, don't try to make up an answer.\n",
"\n",
"{context}\n",
"\n",
"Question: {question}\n",
"Answer in Korean:\"\"\"\n",
"PROMPT = PromptTemplate(\n",
" template=prompt_template, input_variables=[\"context\", \"question\"]\n",
")"
],
"metadata": {
"id": "v0EM5_2t-B_u"
},
"execution_count": null,
"outputs": []
},
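{
"cell_type": "markdown",
"source": [
"To see what the chain will actually send to the model, the template can be rendered by hand. The cell below is an illustrative sketch: the context string is a placeholder (at query time, RetrievalQA fills it with the retrieved chunks), and the question is one of the queries used later in this notebook."
],
"metadata": {}
},
{
"cell_type": "code",
"source": [
"# (Sketch) Preview the rendered prompt. The context here is a placeholder;\n",
"# RetrievalQA substitutes the retrieved document chunks at query time.\n",
"print(PROMPT.format(context=\"<retrieved document text>\",\n",
"                    question=\"인공지능팩토리는 무엇을 하는 회사인가?\"))"
],
"metadata": {},
"execution_count": null,
"outputs": []
},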
{
"cell_type": "code",
"source": [
"chain_type_kwargs = {\"prompt\": PROMPT}\n",
"qa = RetrievalQA.from_chain_type(llm=OpenAI(), \n",
" chain_type=\"stuff\", \n",
" retriever=docsearch.as_retriever(), \n",
" return_source_documents=True, \n",
" chain_type_kwargs=chain_type_kwargs)"
],
"metadata": {
"id": "uxdzkhwq-l8F"
},
"execution_count": null,
"outputs": []
},
{
"cell_type": "code",
"source": [
"\n",
"query = \"인공지능팩토리는 무엇을 하는 회사인가?\"\n",
"result = qa({\"query\": query})"
],
"metadata": {
"id": "ICVo8cxF-nbm"
},
"execution_count": null,
"outputs": []
},
{
"cell_type": "code",
"source": [
"result[\"result\"]"
],
"metadata": {
"id": "ehvhmL_u-2DH"
},
"execution_count": null,
"outputs": []
},
{
"cell_type": "markdown",
"source": [
"' 인공지능팩토리는 END-TO-END 서비스를 제공하는 토탈 AI 플랫폼 서비스를 제공하는 회사입니다.'"
],
"metadata": {
"id": "5H672OHT-00G"
}
},
{
"cell_type": "code",
"source": [
"result[\"source_documents\"]"
],
"metadata": {
"id": "kR66D0ZS-5FV"
},
"execution_count": null,
"outputs": []
},
{
"cell_type": "markdown",
"source": [
"[Document(page_content='Company Profile\\n인공지능팩토리회사소개서', metadata={'source': 'docs/AF_회사소개서.pdf', 'page': 0}),\n",
" Document(page_content='인공지능팩토리는토탈AI\\x00플랫폼서비스입니다.AI 컨설팅부터AI 모델개발, 평가검증및유지보수관리에이르기까지END-TO-END 서비스를제공하며,인공지능경진대회플랫폼및교육실습플랫폼을발판으로AI 생태계를조성하고활성화시키고자합니다.40+경진대회90+태스크10K+보유회원30+고객사\\n0120.01~02-법인설립-기업부설연구소인증20.07~08-초기창업패키지선정-시드투자유치20.11-IITP 인공지능그랜드챌린지-NIPA 인공지능문제해결경진대회21.09~1122.0322.09*2022년4월기준\\n-창업진흥원2021 AI챔피언십-연구개발특구진흥재단경진대회-NIA 서비스공모전경진대회-Microsoft 애저톤경진대회-ETRI휴먼이해인공지능경진대회-KT-ETRI네트워크지능화를위한인공지능경진대회-농림수산식품교육문화정보원실증재배경진대회-한국가스공사주최경진대회운영', metadata={'source': 'docs/AF_회사소개서.pdf', 'page': 1}),\n",
" Document(page_content='05WHY AI FACTORY?전문성있는AI 컨설팅AI기술을도입하고싶지만, 어디부터어떻게적용해야할지막막한분들께인공지능팩토리는최고의파트너입니다. 업계전문가의수준높은컨설팅을통해, 문제정의부터최적화관리까지진정한의미의토탈솔루션을제공합니다.독보적인인공지능모델& 데이터평가검증역량인공지능팩토리는창업이후1년만에20개이상의고객사와함께경진대회개최및모델을개발검증해왔습니다. 다양한산업군경험과전문가기반의평가검증역량으로믿을수있는결과물을드립니다.마무리까지확실한유지보수관리인공지능팩토리는신뢰를중요시합니다. 단순히결과물을전달하는데서끝나지않고사후관리까지담당하여최적화된상태로인공지능모델이이용되도록합니다.', metadata={'source': 'docs/AF_회사소개서.pdf', 'page': 5}),\n",
" Document(page_content='다가올미래사회에서의인공지능의역할은더중요해질것으로예상되고있으며, 따라서인공지능모델개발수요역시급증할것으로기대되고있습니다. 그러나인공지능모델개발에성공했다고하더라도실제인공지능모델을운영하기위해서는다양하게고려해야할사항들이많습니다.\\n04ProblemSolution\\n인공지능팩토리는즉시성, 공유성, 실시간성, 상호작용성및집단지성등의특징을가지고있는토탈AI 플랫폼을제공하여,AI 기술을원하는분들이언제어디서든손쉽게인공지능서비스를이용할수있도록지원합니다.인공지능모델운영을위한다양한고려요소들인공지능을위한플랫폼토탈솔루션', metadata={'source': 'docs/AF_회사소개서.pdf', 'page': 4})]"
],
"metadata": {
"id": "yptwBeGp-5xH"
}
},
{
"cell_type": "code",
"source": [
"query = \"인공지능팩토리가 수행한 이미지 분석 관련 프로젝트는?\"\n",
"result = qa({\"query\": query})"
],
"metadata": {
"id": "U8YwMTjR-7ss"
},
"execution_count": null,
"outputs": []
},
{
"cell_type": "code",
"source": [
"result[\"result\"]\n"
],
"metadata": {
"id": "5zO9RgFqgiPU"
},
"execution_count": null,
"outputs": []
},
{
"cell_type": "markdown",
"source": [
"' 과제명 2020년 미디어 플랫폼 AI 관제시스템 개발'"
],
"metadata": {
"id": "eDVf55kugil-"
}
},
{
"cell_type": "code",
"source": [
"result[\"source_documents\"]"
],
"metadata": {
"id": "c0jstwoNgkT6"
},
"execution_count": null,
"outputs": []
},
{
"cell_type": "markdown",
"source": [
"[Document(page_content='Company Profile\\n인공지능팩토리회사소개서', metadata={'source': 'docs/AF_회사소개서.pdf', 'page': 0}),\n",
" Document(page_content='인공지능팩토리는토탈AI\\x00플랫폼서비스입니다.AI 컨설팅부터AI 모델개발, 평가검증및유지보수관리에이르기까지END-TO-END 서비스를제공하며,인공지능경진대회플랫폼및교육실습플랫폼을발판으로AI 생태계를조성하고활성화시키고자합니다.40+경진대회90+태스크10K+보유회원30+고객사\\n0120.01~02-법인설립-기업부설연구소인증20.07~08-초기창업패키지선정-시드투자유치20.11-IITP 인공지능그랜드챌린지-NIPA 인공지능문제해결경진대회21.09~1122.0322.09*2022년4월기준\\n-창업진흥원2021 AI챔피언십-연구개발특구진흥재단경진대회-NIA 서비스공모전경진대회-Microsoft 애저톤경진대회-ETRI휴먼이해인공지능경진대회-KT-ETRI네트워크지능화를위한인공지능경진대회-농림수산식품교육문화정보원실증재배경진대회-한국가스공사주최경진대회운영', metadata={'source': 'docs/AF_회사소개서.pdf', 'page': 1}),\n",
" Document(page_content='사업명 -\\n과제명 2020년미디어플랫폼 AI 관제시스템개발\\n발주기관 KT주관기관 ㈜인공지능팩토리\\n공동개발기관 ㈜제로원에이아이\\n사업기간❖[1차] 2020년01월28일~ 2020년06월30일\\n❖[2차] 2020년08월19일~ 2021년02월12일\\n사업목표❖AI 기반장애예측을통한선제적예방및신속한인조/조치체계구축\\n❖자율형네트워크별 데이터및AI 모델버전관리체계구축\\n사업내용❖신속한장애판단및예측으로 대형고장 예방및운용생산성향상\\n❖대형고장 선제적대응필요및감시체계정교화를 위한딥러닝기반의미디어\\nAI 관제시스템개발\\n➢다양한서비스를 보유하고 있는고객사를 위해실시간으로 시계열로그이\\n상징후를 감지할수있는알고리즘 개발및운영을위한플랫폼을 제공함 .\\n➢이전에는 사람이직접개입했던 모니터링 시스템을 딥러닝기반의시스템\\n을통해신속한장애판단 및예측으로 대형고장예측및운용생산성을 향\\n상시킴 .\\n< 로그인화면 >\\n< 소프트웨어 구성도 >\\n< 하드웨어 구성도 >\\n< 메인화면 >', metadata={'source': 'docs/인공지능팩토리_개발용역_실적_요약서.pdf', 'page': 3}),\n",
" Document(page_content='다가올미래사회에서의인공지능의역할은더중요해질것으로예상되고있으며, 따라서인공지능모델개발수요역시급증할것으로기대되고있습니다. 그러나인공지능모델개발에성공했다고하더라도실제인공지능모델을운영하기위해서는다양하게고려해야할사항들이많습니다.\\n04ProblemSolution\\n인공지능팩토리는즉시성, 공유성, 실시간성, 상호작용성및집단지성등의특징을가지고있는토탈AI 플랫폼을제공하여,AI 기술을원하는분들이언제어디서든손쉽게인공지능서비스를이용할수있도록지원합니다.인공지능모델운영을위한다양한고려요소들인공지능을위한플랫폼토탈솔루션', metadata={'source': 'docs/AF_회사소개서.pdf', 'page': 4})]"
],
"metadata": {
"id": "Kzm5O-Nmgnhn"
}
},
{
"cell_type": "code",
"source": [
"query = \"파트너는 누구인가?\"\n",
"result = qa({\"query\": query})\n"
],
"metadata": {
"id": "0fhWsVq3goJY"
},
"execution_count": null,
"outputs": []
},
{
"cell_type": "code",
"source": [
"result[\"result\"]"
],
"metadata": {
"id": "8nYpq-gagrNz"
},
"execution_count": null,
"outputs": []
},
{
"cell_type": "markdown",
"source": [
"' 휴먼플래닛이 인공지능팩토리의 파트너입니다.'"
],
"metadata": {
"id": "rAQ-NDJUgtat"
}
},
{
"cell_type": "code",
"source": [
"result[\"source_documents\"]"
],
"metadata": {
"id": "2sTtFr05gvsA"
},
"execution_count": null,
"outputs": []
},
{
"cell_type": "markdown",
"source": [
"[Document(page_content='인공지능팩토리의파트너\\n02\\n휴먼플래닛', metadata={'source': 'docs/AF_회사소개서.pdf', 'page': 2}),\n",
" Document(page_content='Company Profile\\n인공지능팩토리회사소개서', metadata={'source': 'docs/AF_회사소개서.pdf', 'page': 0}),\n",
" Document(page_content='05WHY AI FACTORY?전문성있는AI 컨설팅AI기술을도입하고싶지만, 어디부터어떻게적용해야할지막막한분들께인공지능팩토리는최고의파트너입니다. 업계전문가의수준높은컨설팅을통해, 문제정의부터최적화관리까지진정한의미의토탈솔루션을제공합니다.독보적인인공지능모델& 데이터평가검증역량인공지능팩토리는창업이후1년만에20개이상의고객사와함께경진대회개최및모델을개발검증해왔습니다. 다양한산업군경험과전문가기반의평가검증역량으로믿을수있는결과물을드립니다.마무리까지확실한유지보수관리인공지능팩토리는신뢰를중요시합니다. 단순히결과물을전달하는데서끝나지않고사후관리까지담당하여최적화된상태로인공지능모델이이용되도록합니다.', metadata={'source': 'docs/AF_회사소개서.pdf', 'page': 5}),\n",
" Document(page_content='사업명 -\\n과제명 입자결정 3차원분석모델및Crack 검출모델개발\\n발주기관 LG화학주관기관 ㈜인공지능팩토리\\n공동개발기관 -\\n사업기간 ❖2021년03월15일~ 2021년09월01일\\n사업목표❖[입자 3차원분석모델] 이온통로가되는음극재입자의벡터기반방향성정보\\n를계량화하고 이를시각화하는 모듈개발\\n❖[입자 Crack 검출] 셀의표본이미지로부터 정상입자와균열이발생한입자를\\n탐지하여 계량적지표를산출할수있는 AI 모델개발\\n사업내용❖입자 3차원분석모델개발\\n➢이온통로가되는음극재입자의벡터기반방향성정보를계량화하고 이를\\n시각화하는 모듈개발\\n■3D 극성히스토그램 시각화모듈개발\\n■벡터그룹이 3D 공간에서 가리키는 위치와선호하는 방향이있는지를\\n연구자가 직관적으로 파악할수있도록결과물을 빠르고손쉽게시각화\\n■Matlab 코드(https://github.com/NREL/FIB -SEM -EBSD -particle -scale -\\nanalysis )를실행파일로 생생한뒤, 이를 Python에서구동하도록 설계. \\n즉, Matlab 코드를코어로모듈화시키고데이터입력및구동처리는\\nPython으로개발\\n❖입자 Crack 검출모델개발\\n➢셀의표본이미지로부터 정상입자와균열이발생한입자를탐지하여 계량\\n적지표를산출할수있는 AI 모델개발\\n■입력된이미지가운데정상입자와균열입자를검출\\n■이미지내입자들을 모두탐색한다음해당입자내에균열이존재하는\\n지유무를판단하는 Two-step 방식을활용 (입자 Segmentation → 균열\\n입자판단)\\n●Semantic Segmentation 방식을활용하여 전체입자검출\\n●후처리를 통해검출된입자영역가운데균열이있는영역구분\\n❖인터페이스 구성\\n➢사용이용이하도록 목적에맞는인터페이스 구성\\n■Drag & Drop 방식의개별파일검출용 Web UI 인터페이스\\n■다수데이터일괄처리에용이한 OS 커맨드라인 API 등\\n< 입자 3차원분석 >\\n< 입자 Crack 검출 >', metadata={'source': 'docs/인공지능팩토리_개발용역_실적_요약서.pdf', 'page': 8})]"
],
"metadata": {
"id": "LosfdgbQgxVm"
}
},
{
"cell_type": "code",
"source": [],
"metadata": {
"id": "rLJmbmfIgxvS"
},
"execution_count": null,
"outputs": []
}
]
}